Test Report: KVM_Linux_crio 18014

3348142c74a021d65da8da3e7947dbd5f1375456:2024-01-30:32819

Failed tests (29/310)

Order  Failed test  Duration (s)
39 TestAddons/parallel/Ingress 154.52
53 TestAddons/StoppedEnableDisable 154.2
81 TestFunctional/serial/CacheCmd/cache/add_local 0.82
131 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.92
132 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.78
144 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.62
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.38
148 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
169 TestIngressAddonLegacy/serial/ValidateIngressAddons 181.18
224 TestMultiNode/serial/RestartKeepsNodes 687.97
226 TestMultiNode/serial/StopMultiNode 142.18
233 TestPreload 279.38
293 TestStartStop/group/old-k8s-version/serial/Stop 138.77
298 TestStartStop/group/no-preload/serial/Stop 138.9
299 TestStartStop/group/embed-certs/serial/Stop 138.87
302 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.96
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
305 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
306 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
309 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
311 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.3
312 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.39
313 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.36
314 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.33
315 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 513.03
316 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 327.62
317 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 68.56
320 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 71.21
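The durations above are wall-clock seconds. As a reproduction aid that is not part of the generated report, any one of these failures can typically be re-run on its own through Go's test filter from the minikube source tree; the package path and timeout below are assumptions, and the driver/runtime flags used by this job (kvm2, crio) are omitted because their exact names are not shown in this report:

	# hedged sketch: re-run a single failed integration test locally (path and flags assumed, verify against the repo docs)
	go test ./test/integration -run "TestAddons/parallel/Ingress" -v -timeout 60m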
TestAddons/parallel/Ingress (154.52s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-444608 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-444608 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-444608 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [26f76318-e1c6-4db9-8edd-412294dd7aa8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [26f76318-e1c6-4db9-8edd-412294dd7aa8] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.006400561s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-444608 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m8.615713092s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-444608 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.85
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 addons disable ingress-dns --alsologtostderr -v=1: (1.392222715s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 addons disable ingress --alsologtostderr -v=1: (8.013130672s)
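Note added outside the captured log: the "Process exited with status 28" in the stderr block above is the remote curl's exit code propagated through `minikube ssh`; 28 is curl's code for an operation timeout, so the ingress endpoint never answered at all rather than returning an error page. A hedged manual probe, reusing the profile name and endpoint from this log but adding an explicit timeout and HTTP status output (the extra curl flags are an assumption, not part of the test), would look like:

	out/minikube-linux-amd64 -p addons-444608 ssh 'curl -sS -m 30 -o /dev/null -w "%{http_code}\n" http://127.0.0.1/ -H "Host: nginx.example.com"'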
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-444608 -n addons-444608
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 logs -n 25: (1.401090746s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-179689                                                                     | download-only-179689 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| delete  | -p download-only-659842                                                                     | download-only-659842 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| delete  | -p download-only-216359                                                                     | download-only-216359 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| delete  | -p download-only-179689                                                                     | download-only-179689 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-968156 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | binary-mirror-968156                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:45037                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-968156                                                                     | binary-mirror-968156 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| addons  | enable dashboard -p                                                                         | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | addons-444608                                                                               |                      |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | addons-444608                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-444608 --wait=true                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:04 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-444608 addons                                                                        | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-444608 ssh cat                                                                       | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	|         | /opt/local-path-provisioner/pvc-9b1e24f6-2f64-488b-b79c-cb8ec398703e_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-444608 addons disable                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:05 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-444608 ip                                                                            | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	| addons  | addons-444608 addons disable                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	|         | addons-444608                                                                               |                      |         |         |                     |                     |
	| addons  | addons-444608 addons disable                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:04 UTC |
	|         | addons-444608                                                                               |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:04 UTC | 30 Jan 24 21:05 UTC |
	|         | -p addons-444608                                                                            |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:05 UTC | 30 Jan 24 21:05 UTC |
	|         | -p addons-444608                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-444608 addons                                                                        | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:05 UTC | 30 Jan 24 21:05 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-444608 addons                                                                        | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:05 UTC | 30 Jan 24 21:05 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-444608 ssh curl -s                                                                   | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:05 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| ip      | addons-444608 ip                                                                            | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:07 UTC | 30 Jan 24 21:07 UTC |
	| addons  | addons-444608 addons disable                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:07 UTC | 30 Jan 24 21:07 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-444608 addons disable                                                                | addons-444608        | jenkins | v1.32.0 | 30 Jan 24 21:07 UTC | 30 Jan 24 21:07 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 21:00:49
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 21:00:49.158264  648432 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:00:49.158416  648432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:49.158425  648432 out.go:309] Setting ErrFile to fd 2...
	I0130 21:00:49.158430  648432 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:49.158626  648432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:00:49.159289  648432 out.go:303] Setting JSON to false
	I0130 21:00:49.160222  648432 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6201,"bootTime":1706642248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:00:49.160289  648432 start.go:138] virtualization: kvm guest
	I0130 21:00:49.162574  648432 out.go:177] * [addons-444608] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:00:49.163918  648432 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 21:00:49.165221  648432 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:00:49.164007  648432 notify.go:220] Checking for updates...
	I0130 21:00:49.167789  648432 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:00:49.169234  648432 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:00:49.170771  648432 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 21:00:49.172261  648432 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 21:00:49.173847  648432 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:00:49.206883  648432 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 21:00:49.208428  648432 start.go:298] selected driver: kvm2
	I0130 21:00:49.208448  648432 start.go:902] validating driver "kvm2" against <nil>
	I0130 21:00:49.208460  648432 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 21:00:49.209210  648432 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:00:49.209300  648432 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 21:00:49.225081  648432 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 21:00:49.225188  648432 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 21:00:49.225458  648432 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 21:00:49.225561  648432 cni.go:84] Creating CNI manager for ""
	I0130 21:00:49.225579  648432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:00:49.225589  648432 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 21:00:49.225600  648432 start_flags.go:321] config:
	{Name:addons-444608 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-444608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:00:49.225752  648432 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:00:49.227878  648432 out.go:177] * Starting control plane node addons-444608 in cluster addons-444608
	I0130 21:00:49.229280  648432 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:00:49.229328  648432 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 21:00:49.229341  648432 cache.go:56] Caching tarball of preloaded images
	I0130 21:00:49.229428  648432 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 21:00:49.229440  648432 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 21:00:49.229795  648432 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/config.json ...
	I0130 21:00:49.229829  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/config.json: {Name:mk2d7b05a26d24587745807cd3a776980d10503e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:00:49.230011  648432 start.go:365] acquiring machines lock for addons-444608: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 21:00:49.230078  648432 start.go:369] acquired machines lock for "addons-444608" in 50.967µs
	I0130 21:00:49.230103  648432 start.go:93] Provisioning new machine with config: &{Name:addons-444608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.4 ClusterName:addons-444608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 21:00:49.230200  648432 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 21:00:49.232968  648432 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0130 21:00:49.233137  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:00:49.233194  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:00:49.247556  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38669
	I0130 21:00:49.248058  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:00:49.248645  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:00:49.248676  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:00:49.249038  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:00:49.249241  648432 main.go:141] libmachine: (addons-444608) Calling .GetMachineName
	I0130 21:00:49.249412  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:00:49.249582  648432 start.go:159] libmachine.API.Create for "addons-444608" (driver="kvm2")
	I0130 21:00:49.249616  648432 client.go:168] LocalClient.Create starting
	I0130 21:00:49.249664  648432 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem
	I0130 21:00:49.504064  648432 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem
	I0130 21:00:49.680648  648432 main.go:141] libmachine: Running pre-create checks...
	I0130 21:00:49.680681  648432 main.go:141] libmachine: (addons-444608) Calling .PreCreateCheck
	I0130 21:00:49.681318  648432 main.go:141] libmachine: (addons-444608) Calling .GetConfigRaw
	I0130 21:00:49.681929  648432 main.go:141] libmachine: Creating machine...
	I0130 21:00:49.681949  648432 main.go:141] libmachine: (addons-444608) Calling .Create
	I0130 21:00:49.682160  648432 main.go:141] libmachine: (addons-444608) Creating KVM machine...
	I0130 21:00:49.683568  648432 main.go:141] libmachine: (addons-444608) DBG | found existing default KVM network
	I0130 21:00:49.684454  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:49.684251  648454 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a50}
	I0130 21:00:49.691242  648432 main.go:141] libmachine: (addons-444608) DBG | trying to create private KVM network mk-addons-444608 192.168.39.0/24...
	I0130 21:00:49.763379  648432 main.go:141] libmachine: (addons-444608) Setting up store path in /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608 ...
	I0130 21:00:49.763424  648432 main.go:141] libmachine: (addons-444608) DBG | private KVM network mk-addons-444608 192.168.39.0/24 created
	I0130 21:00:49.763438  648432 main.go:141] libmachine: (addons-444608) Building disk image from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 21:00:49.763462  648432 main.go:141] libmachine: (addons-444608) Downloading /home/jenkins/minikube-integration/18014-640473/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 21:00:49.763484  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:49.763254  648454 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:00:49.997777  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:49.997645  648454 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa...
	I0130 21:00:50.360200  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:50.360021  648454 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/addons-444608.rawdisk...
	I0130 21:00:50.360236  648432 main.go:141] libmachine: (addons-444608) DBG | Writing magic tar header
	I0130 21:00:50.360247  648432 main.go:141] libmachine: (addons-444608) DBG | Writing SSH key tar header
	I0130 21:00:50.360264  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:50.360157  648454 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608 ...
	I0130 21:00:50.360358  648432 main.go:141] libmachine: (addons-444608) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608 (perms=drwx------)
	I0130 21:00:50.360399  648432 main.go:141] libmachine: (addons-444608) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines (perms=drwxr-xr-x)
	I0130 21:00:50.360418  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608
	I0130 21:00:50.360430  648432 main.go:141] libmachine: (addons-444608) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube (perms=drwxr-xr-x)
	I0130 21:00:50.360444  648432 main.go:141] libmachine: (addons-444608) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473 (perms=drwxrwxr-x)
	I0130 21:00:50.360455  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines
	I0130 21:00:50.360462  648432 main.go:141] libmachine: (addons-444608) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 21:00:50.360472  648432 main.go:141] libmachine: (addons-444608) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 21:00:50.360479  648432 main.go:141] libmachine: (addons-444608) Creating domain...
	I0130 21:00:50.360491  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:00:50.360508  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473
	I0130 21:00:50.360517  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 21:00:50.360523  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home/jenkins
	I0130 21:00:50.360529  648432 main.go:141] libmachine: (addons-444608) DBG | Checking permissions on dir: /home
	I0130 21:00:50.360535  648432 main.go:141] libmachine: (addons-444608) DBG | Skipping /home - not owner
	I0130 21:00:50.361528  648432 main.go:141] libmachine: (addons-444608) define libvirt domain using xml: 
	I0130 21:00:50.361558  648432 main.go:141] libmachine: (addons-444608) <domain type='kvm'>
	I0130 21:00:50.361591  648432 main.go:141] libmachine: (addons-444608)   <name>addons-444608</name>
	I0130 21:00:50.361617  648432 main.go:141] libmachine: (addons-444608)   <memory unit='MiB'>4000</memory>
	I0130 21:00:50.361629  648432 main.go:141] libmachine: (addons-444608)   <vcpu>2</vcpu>
	I0130 21:00:50.361644  648432 main.go:141] libmachine: (addons-444608)   <features>
	I0130 21:00:50.361658  648432 main.go:141] libmachine: (addons-444608)     <acpi/>
	I0130 21:00:50.361669  648432 main.go:141] libmachine: (addons-444608)     <apic/>
	I0130 21:00:50.361678  648432 main.go:141] libmachine: (addons-444608)     <pae/>
	I0130 21:00:50.361688  648432 main.go:141] libmachine: (addons-444608)     
	I0130 21:00:50.361696  648432 main.go:141] libmachine: (addons-444608)   </features>
	I0130 21:00:50.361707  648432 main.go:141] libmachine: (addons-444608)   <cpu mode='host-passthrough'>
	I0130 21:00:50.361719  648432 main.go:141] libmachine: (addons-444608)   
	I0130 21:00:50.361734  648432 main.go:141] libmachine: (addons-444608)   </cpu>
	I0130 21:00:50.361769  648432 main.go:141] libmachine: (addons-444608)   <os>
	I0130 21:00:50.361802  648432 main.go:141] libmachine: (addons-444608)     <type>hvm</type>
	I0130 21:00:50.361827  648432 main.go:141] libmachine: (addons-444608)     <boot dev='cdrom'/>
	I0130 21:00:50.361839  648432 main.go:141] libmachine: (addons-444608)     <boot dev='hd'/>
	I0130 21:00:50.361855  648432 main.go:141] libmachine: (addons-444608)     <bootmenu enable='no'/>
	I0130 21:00:50.361872  648432 main.go:141] libmachine: (addons-444608)   </os>
	I0130 21:00:50.361886  648432 main.go:141] libmachine: (addons-444608)   <devices>
	I0130 21:00:50.361906  648432 main.go:141] libmachine: (addons-444608)     <disk type='file' device='cdrom'>
	I0130 21:00:50.361926  648432 main.go:141] libmachine: (addons-444608)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/boot2docker.iso'/>
	I0130 21:00:50.361943  648432 main.go:141] libmachine: (addons-444608)       <target dev='hdc' bus='scsi'/>
	I0130 21:00:50.361958  648432 main.go:141] libmachine: (addons-444608)       <readonly/>
	I0130 21:00:50.361970  648432 main.go:141] libmachine: (addons-444608)     </disk>
	I0130 21:00:50.361985  648432 main.go:141] libmachine: (addons-444608)     <disk type='file' device='disk'>
	I0130 21:00:50.361998  648432 main.go:141] libmachine: (addons-444608)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 21:00:50.362030  648432 main.go:141] libmachine: (addons-444608)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/addons-444608.rawdisk'/>
	I0130 21:00:50.362052  648432 main.go:141] libmachine: (addons-444608)       <target dev='hda' bus='virtio'/>
	I0130 21:00:50.362070  648432 main.go:141] libmachine: (addons-444608)     </disk>
	I0130 21:00:50.362093  648432 main.go:141] libmachine: (addons-444608)     <interface type='network'>
	I0130 21:00:50.362107  648432 main.go:141] libmachine: (addons-444608)       <source network='mk-addons-444608'/>
	I0130 21:00:50.362119  648432 main.go:141] libmachine: (addons-444608)       <model type='virtio'/>
	I0130 21:00:50.362129  648432 main.go:141] libmachine: (addons-444608)     </interface>
	I0130 21:00:50.362134  648432 main.go:141] libmachine: (addons-444608)     <interface type='network'>
	I0130 21:00:50.362143  648432 main.go:141] libmachine: (addons-444608)       <source network='default'/>
	I0130 21:00:50.362148  648432 main.go:141] libmachine: (addons-444608)       <model type='virtio'/>
	I0130 21:00:50.362154  648432 main.go:141] libmachine: (addons-444608)     </interface>
	I0130 21:00:50.362163  648432 main.go:141] libmachine: (addons-444608)     <serial type='pty'>
	I0130 21:00:50.362178  648432 main.go:141] libmachine: (addons-444608)       <target port='0'/>
	I0130 21:00:50.362190  648432 main.go:141] libmachine: (addons-444608)     </serial>
	I0130 21:00:50.362204  648432 main.go:141] libmachine: (addons-444608)     <console type='pty'>
	I0130 21:00:50.362216  648432 main.go:141] libmachine: (addons-444608)       <target type='serial' port='0'/>
	I0130 21:00:50.362228  648432 main.go:141] libmachine: (addons-444608)     </console>
	I0130 21:00:50.362240  648432 main.go:141] libmachine: (addons-444608)     <rng model='virtio'>
	I0130 21:00:50.362253  648432 main.go:141] libmachine: (addons-444608)       <backend model='random'>/dev/random</backend>
	I0130 21:00:50.362268  648432 main.go:141] libmachine: (addons-444608)     </rng>
	I0130 21:00:50.362281  648432 main.go:141] libmachine: (addons-444608)     
	I0130 21:00:50.362295  648432 main.go:141] libmachine: (addons-444608)     
	I0130 21:00:50.362308  648432 main.go:141] libmachine: (addons-444608)   </devices>
	I0130 21:00:50.362318  648432 main.go:141] libmachine: (addons-444608) </domain>
	I0130 21:00:50.362334  648432 main.go:141] libmachine: (addons-444608) 
	I0130 21:00:50.367746  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:d7:4d:7e in network default
	I0130 21:00:50.368333  648432 main.go:141] libmachine: (addons-444608) Ensuring networks are active...
	I0130 21:00:50.368359  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:50.369013  648432 main.go:141] libmachine: (addons-444608) Ensuring network default is active
	I0130 21:00:50.369394  648432 main.go:141] libmachine: (addons-444608) Ensuring network mk-addons-444608 is active
	I0130 21:00:50.369917  648432 main.go:141] libmachine: (addons-444608) Getting domain xml...
	I0130 21:00:50.370663  648432 main.go:141] libmachine: (addons-444608) Creating domain...
	I0130 21:00:51.625020  648432 main.go:141] libmachine: (addons-444608) Waiting to get IP...
	I0130 21:00:51.625804  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:51.626212  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:51.626247  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:51.626181  648454 retry.go:31] will retry after 261.261021ms: waiting for machine to come up
	I0130 21:00:51.888914  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:51.889342  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:51.889380  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:51.889287  648454 retry.go:31] will retry after 249.065018ms: waiting for machine to come up
	I0130 21:00:52.139594  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:52.139984  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:52.140009  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:52.139949  648454 retry.go:31] will retry after 467.241495ms: waiting for machine to come up
	I0130 21:00:52.608722  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:52.609182  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:52.609210  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:52.609109  648454 retry.go:31] will retry after 460.079639ms: waiting for machine to come up
	I0130 21:00:53.070432  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:53.070825  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:53.070873  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:53.070757  648454 retry.go:31] will retry after 572.14337ms: waiting for machine to come up
	I0130 21:00:53.644416  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:53.644783  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:53.644818  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:53.644730  648454 retry.go:31] will retry after 943.752761ms: waiting for machine to come up
	I0130 21:00:54.590442  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:54.590786  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:54.590819  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:54.590737  648454 retry.go:31] will retry after 829.779676ms: waiting for machine to come up
	I0130 21:00:55.421857  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:55.422295  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:55.422334  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:55.422246  648454 retry.go:31] will retry after 1.396582304s: waiting for machine to come up
	I0130 21:00:56.820915  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:56.821344  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:56.821377  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:56.821286  648454 retry.go:31] will retry after 1.513845406s: waiting for machine to come up
	I0130 21:00:58.336959  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:58.337350  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:58.337374  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:58.337323  648454 retry.go:31] will retry after 1.506035249s: waiting for machine to come up
	I0130 21:00:59.844607  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:00:59.845079  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:00:59.845108  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:00:59.845009  648454 retry.go:31] will retry after 2.368844273s: waiting for machine to come up
	I0130 21:01:02.215357  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:02.215892  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:01:02.215941  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:01:02.215820  648454 retry.go:31] will retry after 3.108489413s: waiting for machine to come up
	I0130 21:01:05.325886  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:05.326215  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:01:05.326251  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:01:05.326154  648454 retry.go:31] will retry after 3.455172368s: waiting for machine to come up
	I0130 21:01:08.782596  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:08.783097  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find current IP address of domain addons-444608 in network mk-addons-444608
	I0130 21:01:08.783125  648432 main.go:141] libmachine: (addons-444608) DBG | I0130 21:01:08.783044  648454 retry.go:31] will retry after 4.31831982s: waiting for machine to come up
	I0130 21:01:13.105436  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.106009  648432 main.go:141] libmachine: (addons-444608) Found IP for machine: 192.168.39.85
	I0130 21:01:13.106036  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has current primary IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.106065  648432 main.go:141] libmachine: (addons-444608) Reserving static IP address...
	I0130 21:01:13.106400  648432 main.go:141] libmachine: (addons-444608) DBG | unable to find host DHCP lease matching {name: "addons-444608", mac: "52:54:00:ab:dd:46", ip: "192.168.39.85"} in network mk-addons-444608
	I0130 21:01:13.183341  648432 main.go:141] libmachine: (addons-444608) DBG | Getting to WaitForSSH function...
	I0130 21:01:13.183408  648432 main.go:141] libmachine: (addons-444608) Reserved static IP address: 192.168.39.85
	I0130 21:01:13.183427  648432 main.go:141] libmachine: (addons-444608) Waiting for SSH to be available...
	I0130 21:01:13.185713  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.186128  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.186164  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.186277  648432 main.go:141] libmachine: (addons-444608) DBG | Using SSH client type: external
	I0130 21:01:13.186305  648432 main.go:141] libmachine: (addons-444608) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa (-rw-------)
	I0130 21:01:13.186338  648432 main.go:141] libmachine: (addons-444608) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.85 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 21:01:13.186352  648432 main.go:141] libmachine: (addons-444608) DBG | About to run SSH command:
	I0130 21:01:13.186370  648432 main.go:141] libmachine: (addons-444608) DBG | exit 0
	I0130 21:01:13.282215  648432 main.go:141] libmachine: (addons-444608) DBG | SSH cmd err, output: <nil>: 
	I0130 21:01:13.282529  648432 main.go:141] libmachine: (addons-444608) KVM machine creation complete!
	I0130 21:01:13.282838  648432 main.go:141] libmachine: (addons-444608) Calling .GetConfigRaw
	I0130 21:01:13.283483  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:13.283687  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:13.283859  648432 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 21:01:13.283878  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:13.285001  648432 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 21:01:13.285017  648432 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 21:01:13.285027  648432 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 21:01:13.285033  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:13.286975  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.287320  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.287366  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.287528  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:13.287699  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.287847  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.288013  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:13.288207  648432 main.go:141] libmachine: Using SSH client type: native
	I0130 21:01:13.288734  648432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0130 21:01:13.288761  648432 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 21:01:13.416943  648432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:01:13.416985  648432 main.go:141] libmachine: Detecting the provisioner...
	I0130 21:01:13.416995  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:13.419764  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.420146  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.420189  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.420311  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:13.420568  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.420739  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.420885  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:13.421138  648432 main.go:141] libmachine: Using SSH client type: native
	I0130 21:01:13.421485  648432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0130 21:01:13.421505  648432 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 21:01:13.546734  648432 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 21:01:13.546849  648432 main.go:141] libmachine: found compatible host: buildroot
	I0130 21:01:13.546868  648432 main.go:141] libmachine: Provisioning with buildroot...
	I0130 21:01:13.546881  648432 main.go:141] libmachine: (addons-444608) Calling .GetMachineName
	I0130 21:01:13.547207  648432 buildroot.go:166] provisioning hostname "addons-444608"
	I0130 21:01:13.547253  648432 main.go:141] libmachine: (addons-444608) Calling .GetMachineName
	I0130 21:01:13.547495  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:13.550206  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.550527  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.550562  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.550672  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:13.550897  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.551100  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.551261  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:13.551424  648432 main.go:141] libmachine: Using SSH client type: native
	I0130 21:01:13.551799  648432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0130 21:01:13.551818  648432 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-444608 && echo "addons-444608" | sudo tee /etc/hostname
	I0130 21:01:13.691817  648432 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-444608
	
	I0130 21:01:13.691855  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:13.694863  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.695180  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.695218  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.695389  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:13.695628  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.695838  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:13.695997  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:13.696180  648432 main.go:141] libmachine: Using SSH client type: native
	I0130 21:01:13.696592  648432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0130 21:01:13.696611  648432 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-444608' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-444608/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-444608' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 21:01:13.835419  648432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:01:13.835497  648432 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 21:01:13.835534  648432 buildroot.go:174] setting up certificates
	I0130 21:01:13.835567  648432 provision.go:83] configureAuth start
	I0130 21:01:13.835586  648432 main.go:141] libmachine: (addons-444608) Calling .GetMachineName
	I0130 21:01:13.835943  648432 main.go:141] libmachine: (addons-444608) Calling .GetIP
	I0130 21:01:13.838536  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.838869  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.838892  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.839106  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:13.840965  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.841227  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:13.841290  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:13.841398  648432 provision.go:138] copyHostCerts
	I0130 21:01:13.841489  648432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 21:01:13.841676  648432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 21:01:13.841782  648432 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 21:01:13.841881  648432 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.addons-444608 san=[192.168.39.85 192.168.39.85 localhost 127.0.0.1 minikube addons-444608]
	I0130 21:01:14.029668  648432 provision.go:172] copyRemoteCerts
	I0130 21:01:14.029741  648432 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 21:01:14.029768  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:14.032416  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.032741  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.032780  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.033023  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:14.033255  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.033456  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:14.033662  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:14.127065  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 21:01:14.150805  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0130 21:01:14.174754  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 21:01:14.198498  648432 provision.go:86] duration metric: configureAuth took 362.897824ms
	I0130 21:01:14.198532  648432 buildroot.go:189] setting minikube options for container-runtime
	I0130 21:01:14.198706  648432 config.go:182] Loaded profile config "addons-444608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:01:14.198826  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:14.201787  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.202304  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.202344  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.202602  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:14.202834  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.203044  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.203192  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:14.203400  648432 main.go:141] libmachine: Using SSH client type: native
	I0130 21:01:14.203730  648432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0130 21:01:14.203747  648432 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 21:01:14.534672  648432 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 21:01:14.534701  648432 main.go:141] libmachine: Checking connection to Docker...
	I0130 21:01:14.534730  648432 main.go:141] libmachine: (addons-444608) Calling .GetURL
	I0130 21:01:14.535983  648432 main.go:141] libmachine: (addons-444608) DBG | Using libvirt version 6000000
	I0130 21:01:14.538299  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.538656  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.538691  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.538853  648432 main.go:141] libmachine: Docker is up and running!
	I0130 21:01:14.538875  648432 main.go:141] libmachine: Reticulating splines...
	I0130 21:01:14.538884  648432 client.go:171] LocalClient.Create took 25.289255124s
	I0130 21:01:14.538909  648432 start.go:167] duration metric: libmachine.API.Create for "addons-444608" took 25.289330348s
	I0130 21:01:14.538919  648432 start.go:300] post-start starting for "addons-444608" (driver="kvm2")
	I0130 21:01:14.538952  648432 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 21:01:14.538978  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:14.539286  648432 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 21:01:14.539313  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:14.541587  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.541878  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.541919  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.542040  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:14.542241  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.542420  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:14.542564  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:14.634954  648432 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 21:01:14.639178  648432 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 21:01:14.639205  648432 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 21:01:14.639294  648432 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 21:01:14.639317  648432 start.go:303] post-start completed in 100.392759ms
	I0130 21:01:14.639358  648432 main.go:141] libmachine: (addons-444608) Calling .GetConfigRaw
	I0130 21:01:14.639938  648432 main.go:141] libmachine: (addons-444608) Calling .GetIP
	I0130 21:01:14.642823  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.643179  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.643204  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.643472  648432 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/config.json ...
	I0130 21:01:14.643649  648432 start.go:128] duration metric: createHost completed in 25.413436482s
	I0130 21:01:14.643675  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:14.646055  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.646401  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.646426  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.646586  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:14.646858  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.647057  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.647217  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:14.647361  648432 main.go:141] libmachine: Using SSH client type: native
	I0130 21:01:14.647676  648432 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.85 22 <nil> <nil>}
	I0130 21:01:14.647689  648432 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 21:01:14.774554  648432 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706648474.754875292
	
	I0130 21:01:14.774581  648432 fix.go:206] guest clock: 1706648474.754875292
	I0130 21:01:14.774590  648432 fix.go:219] Guest: 2024-01-30 21:01:14.754875292 +0000 UTC Remote: 2024-01-30 21:01:14.643661849 +0000 UTC m=+25.540772230 (delta=111.213443ms)
	I0130 21:01:14.774622  648432 fix.go:190] guest clock delta is within tolerance: 111.213443ms
	I0130 21:01:14.774628  648432 start.go:83] releasing machines lock for "addons-444608", held for 25.544538892s
	I0130 21:01:14.774650  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:14.774963  648432 main.go:141] libmachine: (addons-444608) Calling .GetIP
	I0130 21:01:14.777609  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.777909  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.777948  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.778093  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:14.778799  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:14.779011  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:14.779130  648432 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 21:01:14.779175  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:14.779253  648432 ssh_runner.go:195] Run: cat /version.json
	I0130 21:01:14.779277  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:14.781786  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.782068  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.782205  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.782233  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.782434  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:14.782444  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:14.782459  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:14.782661  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.782669  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:14.782864  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:14.782902  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:14.783020  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:14.783028  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:14.783178  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:14.871406  648432 ssh_runner.go:195] Run: systemctl --version
	I0130 21:01:14.897557  648432 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 21:01:15.059016  648432 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 21:01:15.065365  648432 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 21:01:15.065440  648432 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 21:01:15.079608  648432 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 21:01:15.079644  648432 start.go:475] detecting cgroup driver to use...
	I0130 21:01:15.079747  648432 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 21:01:15.094739  648432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 21:01:15.108997  648432 docker.go:217] disabling cri-docker service (if available) ...
	I0130 21:01:15.109087  648432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 21:01:15.123178  648432 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 21:01:15.137110  648432 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 21:01:15.247574  648432 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 21:01:15.363086  648432 docker.go:233] disabling docker service ...
	I0130 21:01:15.363198  648432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 21:01:15.376243  648432 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 21:01:15.387813  648432 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 21:01:15.488912  648432 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 21:01:15.594145  648432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 21:01:15.606895  648432 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 21:01:15.624682  648432 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 21:01:15.624760  648432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:01:15.635352  648432 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 21:01:15.635426  648432 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:01:15.645459  648432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:01:15.655546  648432 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:01:15.666279  648432 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 21:01:15.677013  648432 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 21:01:15.686081  648432 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 21:01:15.686145  648432 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 21:01:15.700176  648432 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 21:01:15.711170  648432 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 21:01:15.835000  648432 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 21:01:16.015498  648432 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 21:01:16.015611  648432 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 21:01:16.021248  648432 start.go:543] Will wait 60s for crictl version
	I0130 21:01:16.021339  648432 ssh_runner.go:195] Run: which crictl
	I0130 21:01:16.025707  648432 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 21:01:16.066903  648432 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 21:01:16.067058  648432 ssh_runner.go:195] Run: crio --version
	I0130 21:01:16.119125  648432 ssh_runner.go:195] Run: crio --version
	I0130 21:01:16.170502  648432 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 21:01:16.172043  648432 main.go:141] libmachine: (addons-444608) Calling .GetIP
	I0130 21:01:16.174675  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:16.174942  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:16.174982  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:16.175216  648432 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 21:01:16.179516  648432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 21:01:16.192462  648432 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:01:16.192547  648432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 21:01:16.229428  648432 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 21:01:16.229540  648432 ssh_runner.go:195] Run: which lz4
	I0130 21:01:16.233678  648432 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 21:01:16.237956  648432 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 21:01:16.237988  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 21:01:18.043720  648432 crio.go:444] Took 1.810053 seconds to copy over tarball
	I0130 21:01:18.043796  648432 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 21:01:21.242851  648432 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.199022428s)
	I0130 21:01:21.242923  648432 crio.go:451] Took 3.199172 seconds to extract the tarball
	I0130 21:01:21.242937  648432 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 21:01:21.284629  648432 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 21:01:21.357827  648432 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 21:01:21.357859  648432 cache_images.go:84] Images are preloaded, skipping loading
	I0130 21:01:21.357960  648432 ssh_runner.go:195] Run: crio config
	I0130 21:01:21.413141  648432 cni.go:84] Creating CNI manager for ""
	I0130 21:01:21.413171  648432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:01:21.413194  648432 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 21:01:21.413225  648432 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.85 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-444608 NodeName:addons-444608 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.85"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.85 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 21:01:21.413432  648432 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.85
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-444608"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.85
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.85"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 21:01:21.413610  648432 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-444608 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.85
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-444608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 21:01:21.413689  648432 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 21:01:21.424152  648432 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 21:01:21.424251  648432 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 21:01:21.433731  648432 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0130 21:01:21.450464  648432 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 21:01:21.467900  648432 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2097 bytes)
	I0130 21:01:21.485843  648432 ssh_runner.go:195] Run: grep 192.168.39.85	control-plane.minikube.internal$ /etc/hosts
	I0130 21:01:21.490288  648432 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.85	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 21:01:21.502863  648432 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608 for IP: 192.168.39.85
	I0130 21:01:21.502911  648432 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.503070  648432 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 21:01:21.622657  648432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt ...
	I0130 21:01:21.622697  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt: {Name:mkc1f460568a8cc585b65e1f79bf0d7b7d6f8c72 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.622883  648432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key ...
	I0130 21:01:21.622894  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key: {Name:mk840b6cb9f22c0d060a2ae8d9d712683a2f27a4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.622965  648432 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 21:01:21.751531  648432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt ...
	I0130 21:01:21.751596  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt: {Name:mk4c90d3f5ad52d22fbb687b558b2b824b7f8f85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.751846  648432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key ...
	I0130 21:01:21.751864  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key: {Name:mkbc4368f5b4ac1a1fb5e363b98276aaeab029cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.752045  648432 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.key
	I0130 21:01:21.752064  648432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt with IP's: []
	I0130 21:01:21.851749  648432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt ...
	I0130 21:01:21.851790  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: {Name:mk08d614294d33cf6eaf290aa3669a8a447a7ade Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.851991  648432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.key ...
	I0130 21:01:21.852009  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.key: {Name:mk7c1c132acf3eafd6e32258697e8def1bfb2152 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.852117  648432 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.key.5fc70b3d
	I0130 21:01:21.852145  648432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.crt.5fc70b3d with IP's: [192.168.39.85 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 21:01:21.927030  648432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.crt.5fc70b3d ...
	I0130 21:01:21.927064  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.crt.5fc70b3d: {Name:mkf8805e45d12e1e9966d867f4985ff906d9d19c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.927256  648432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.key.5fc70b3d ...
	I0130 21:01:21.927281  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.key.5fc70b3d: {Name:mk12abf8baf5de20b5cd0732779a3d33f96a4916 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:21.927387  648432 certs.go:337] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.crt.5fc70b3d -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.crt
	I0130 21:01:21.927543  648432 certs.go:341] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.key.5fc70b3d -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.key
	I0130 21:01:21.927625  648432 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.key
	I0130 21:01:21.927650  648432 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.crt with IP's: []
	I0130 21:01:22.183510  648432 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.crt ...
	I0130 21:01:22.183554  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.crt: {Name:mk90dbf9b116be536cf57fc6208449af12155cad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:22.183751  648432 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.key ...
	I0130 21:01:22.183778  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.key: {Name:mkab8055d87f7c41d30c1a9b34322c1fca5337d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:22.183967  648432 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 21:01:22.184005  648432 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 21:01:22.184034  648432 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 21:01:22.184067  648432 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 21:01:22.184745  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 21:01:22.210331  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 21:01:22.233634  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 21:01:22.256918  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 21:01:22.281312  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 21:01:22.305113  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 21:01:22.328400  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 21:01:22.352218  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 21:01:22.375306  648432 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 21:01:22.400572  648432 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 21:01:22.416879  648432 ssh_runner.go:195] Run: openssl version
	I0130 21:01:22.422582  648432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 21:01:22.433303  648432 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:01:22.438058  648432 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:01:22.438127  648432 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:01:22.443541  648432 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 21:01:22.454607  648432 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 21:01:22.459042  648432 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 21:01:22.459103  648432 kubeadm.go:404] StartCluster: {Name:addons-444608 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-444608 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:01:22.459188  648432 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 21:01:22.459236  648432 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 21:01:22.499202  648432 cri.go:89] found id: ""
	I0130 21:01:22.499284  648432 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 21:01:22.509266  648432 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 21:01:22.519036  648432 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 21:01:22.529165  648432 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 21:01:22.529218  648432 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 21:01:22.720555  648432 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 21:01:35.912706  648432 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 21:01:35.912759  648432 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 21:01:35.912848  648432 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 21:01:35.912971  648432 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 21:01:35.913054  648432 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 21:01:35.913140  648432 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 21:01:35.914871  648432 out.go:204]   - Generating certificates and keys ...
	I0130 21:01:35.914961  648432 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 21:01:35.915040  648432 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 21:01:35.915114  648432 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 21:01:35.915187  648432 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 21:01:35.915253  648432 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 21:01:35.915321  648432 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 21:01:35.915391  648432 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 21:01:35.915519  648432 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-444608 localhost] and IPs [192.168.39.85 127.0.0.1 ::1]
	I0130 21:01:35.915575  648432 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 21:01:35.915690  648432 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-444608 localhost] and IPs [192.168.39.85 127.0.0.1 ::1]
	I0130 21:01:35.915815  648432 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 21:01:35.915908  648432 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 21:01:35.915951  648432 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 21:01:35.916007  648432 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 21:01:35.916071  648432 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 21:01:35.916155  648432 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 21:01:35.916210  648432 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 21:01:35.916259  648432 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 21:01:35.916324  648432 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 21:01:35.916418  648432 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 21:01:35.918190  648432 out.go:204]   - Booting up control plane ...
	I0130 21:01:35.918277  648432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 21:01:35.918370  648432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 21:01:35.918441  648432 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 21:01:35.918533  648432 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 21:01:35.918659  648432 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 21:01:35.918710  648432 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 21:01:35.918832  648432 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 21:01:35.918900  648432 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.008067 seconds
	I0130 21:01:35.919008  648432 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 21:01:35.919131  648432 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 21:01:35.919218  648432 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 21:01:35.919404  648432 kubeadm.go:322] [mark-control-plane] Marking the node addons-444608 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 21:01:35.919479  648432 kubeadm.go:322] [bootstrap-token] Using token: 6ckkiz.z3271h2flzia3806
	I0130 21:01:35.921007  648432 out.go:204]   - Configuring RBAC rules ...
	I0130 21:01:35.921129  648432 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 21:01:35.921211  648432 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 21:01:35.921348  648432 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 21:01:35.921512  648432 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 21:01:35.921639  648432 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 21:01:35.921756  648432 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 21:01:35.921896  648432 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 21:01:35.921965  648432 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 21:01:35.922034  648432 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 21:01:35.922043  648432 kubeadm.go:322] 
	I0130 21:01:35.922115  648432 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 21:01:35.922124  648432 kubeadm.go:322] 
	I0130 21:01:35.922220  648432 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 21:01:35.922229  648432 kubeadm.go:322] 
	I0130 21:01:35.922279  648432 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 21:01:35.922371  648432 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 21:01:35.922435  648432 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 21:01:35.922447  648432 kubeadm.go:322] 
	I0130 21:01:35.922524  648432 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 21:01:35.922539  648432 kubeadm.go:322] 
	I0130 21:01:35.922633  648432 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 21:01:35.922646  648432 kubeadm.go:322] 
	I0130 21:01:35.922698  648432 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 21:01:35.922764  648432 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 21:01:35.922820  648432 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 21:01:35.922828  648432 kubeadm.go:322] 
	I0130 21:01:35.922893  648432 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 21:01:35.922962  648432 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 21:01:35.922968  648432 kubeadm.go:322] 
	I0130 21:01:35.923040  648432 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 6ckkiz.z3271h2flzia3806 \
	I0130 21:01:35.923125  648432 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 21:01:35.923144  648432 kubeadm.go:322] 	--control-plane 
	I0130 21:01:35.923150  648432 kubeadm.go:322] 
	I0130 21:01:35.923272  648432 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 21:01:35.923296  648432 kubeadm.go:322] 
	I0130 21:01:35.923416  648432 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 6ckkiz.z3271h2flzia3806 \
	I0130 21:01:35.923573  648432 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 21:01:35.923594  648432 cni.go:84] Creating CNI manager for ""
	I0130 21:01:35.923605  648432 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:01:35.925457  648432 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 21:01:35.926817  648432 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 21:01:35.969492  648432 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 21:01:36.030164  648432 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 21:01:36.030282  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:36.030281  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=addons-444608 minikube.k8s.io/updated_at=2024_01_30T21_01_36_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:36.232709  648432 ops.go:34] apiserver oom_adj: -16
	I0130 21:01:36.232938  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:36.733550  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:37.232959  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:37.733057  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:38.233015  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:38.733505  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:39.233292  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:39.733865  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:40.233666  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:40.733376  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:41.233259  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:41.733137  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:42.233327  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:42.733781  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:43.233183  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:43.733373  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:44.233639  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:44.732968  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:45.232979  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:45.733772  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:46.233788  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:46.733068  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:47.233777  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:47.733876  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:48.233324  648432 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:01:48.373144  648432 kubeadm.go:1088] duration metric: took 12.342926509s to wait for elevateKubeSystemPrivileges.
	I0130 21:01:48.373185  648432 kubeadm.go:406] StartCluster complete in 25.914098602s
	I0130 21:01:48.373211  648432 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:48.373353  648432 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:01:48.373948  648432 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:01:48.374201  648432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 21:01:48.374287  648432 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0130 21:01:48.374458  648432 config.go:182] Loaded profile config "addons-444608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:01:48.374506  648432 addons.go:69] Setting inspektor-gadget=true in profile "addons-444608"
	I0130 21:01:48.374496  648432 addons.go:69] Setting ingress=true in profile "addons-444608"
	I0130 21:01:48.374522  648432 addons.go:69] Setting gcp-auth=true in profile "addons-444608"
	I0130 21:01:48.374513  648432 addons.go:69] Setting metrics-server=true in profile "addons-444608"
	I0130 21:01:48.374545  648432 mustload.go:65] Loading cluster: addons-444608
	I0130 21:01:48.374545  648432 addons.go:69] Setting helm-tiller=true in profile "addons-444608"
	I0130 21:01:48.374557  648432 addons.go:234] Setting addon helm-tiller=true in "addons-444608"
	I0130 21:01:48.374679  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.374460  648432 addons.go:69] Setting cloud-spanner=true in profile "addons-444608"
	I0130 21:01:48.374728  648432 addons.go:234] Setting addon cloud-spanner=true in "addons-444608"
	I0130 21:01:48.374765  648432 config.go:182] Loaded profile config "addons-444608": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:01:48.374510  648432 addons.go:69] Setting volumesnapshots=true in profile "addons-444608"
	I0130 21:01:48.374817  648432 addons.go:234] Setting addon volumesnapshots=true in "addons-444608"
	I0130 21:01:48.374558  648432 addons.go:234] Setting addon inspektor-gadget=true in "addons-444608"
	I0130 21:01:48.374868  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.374897  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.374467  648432 addons.go:69] Setting default-storageclass=true in profile "addons-444608"
	I0130 21:01:48.374962  648432 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-444608"
	I0130 21:01:48.374482  648432 addons.go:69] Setting registry=true in profile "addons-444608"
	I0130 21:01:48.375048  648432 addons.go:234] Setting addon registry=true in "addons-444608"
	I0130 21:01:48.374485  648432 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-444608"
	I0130 21:01:48.375140  648432 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-444608"
	I0130 21:01:48.375163  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.375181  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.375208  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.375273  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.375298  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.375311  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.375343  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.374496  648432 addons.go:69] Setting ingress-dns=true in profile "addons-444608"
	I0130 21:01:48.375440  648432 addons.go:234] Setting addon ingress-dns=true in "addons-444608"
	I0130 21:01:48.374498  648432 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-444608"
	I0130 21:01:48.375466  648432 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-444608"
	I0130 21:01:48.374461  648432 addons.go:69] Setting yakd=true in profile "addons-444608"
	I0130 21:01:48.375486  648432 addons.go:234] Setting addon yakd=true in "addons-444608"
	I0130 21:01:48.374575  648432 addons.go:234] Setting addon metrics-server=true in "addons-444608"
	I0130 21:01:48.374474  648432 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-444608"
	I0130 21:01:48.375509  648432 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-444608"
	I0130 21:01:48.374490  648432 addons.go:69] Setting storage-provisioner=true in profile "addons-444608"
	I0130 21:01:48.375532  648432 addons.go:234] Setting addon storage-provisioner=true in "addons-444608"
	I0130 21:01:48.374789  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.374607  648432 addons.go:234] Setting addon ingress=true in "addons-444608"
	I0130 21:01:48.375774  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.375861  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.375871  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.375889  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.375935  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.375897  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.375958  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376116  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.376137  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376229  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.376259  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376264  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.376283  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376309  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.376329  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.376336  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.376348  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376376  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376559  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.376598  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.376821  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.376917  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.376937  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.376998  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.377056  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.377121  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.377142  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.377504  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.377579  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.396083  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39399
	I0130 21:01:48.396454  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41985
	I0130 21:01:48.396548  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39221
	I0130 21:01:48.396570  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45435
	I0130 21:01:48.397024  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.397148  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.397509  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.397526  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.397656  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.397672  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.397701  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39917
	I0130 21:01:48.398120  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.398258  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.398354  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.398400  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.398449  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.398754  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.398780  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.399006  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.399028  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.399095  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.399267  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.399465  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.400093  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.400134  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.401062  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.406145  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.406185  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.406588  648432 addons.go:234] Setting addon default-storageclass=true in "addons-444608"
	I0130 21:01:48.406635  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.407038  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.407086  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.407323  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.407372  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.409817  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.409883  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.410373  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.410849  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.410923  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.413309  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.413398  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36331
	I0130 21:01:48.413575  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40417
	I0130 21:01:48.414456  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.414498  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.415143  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.415252  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.415757  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.415776  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.415919  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.415931  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.416411  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.416966  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.417001  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.417226  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.417832  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.417860  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.427244  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43529
	I0130 21:01:48.427857  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.428394  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.428415  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.428856  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.429044  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.431061  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.433500  648432 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.3
	I0130 21:01:48.435116  648432 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0130 21:01:48.435136  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0130 21:01:48.435179  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.438816  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.439228  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.439253  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.439539  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.439769  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.439963  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.440147  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.443611  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34911
	I0130 21:01:48.443805  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36659
	I0130 21:01:48.444351  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33657
	I0130 21:01:48.444915  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.445604  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.445622  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.446012  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.446109  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.446785  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.446829  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.447151  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.447168  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.447608  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.448253  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.448293  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.448863  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42053
	I0130 21:01:48.449405  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.449959  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.449992  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.450065  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.450385  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42651
	I0130 21:01:48.450574  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.450580  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37113
	I0130 21:01:48.450993  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.451112  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.451142  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.451185  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.451550  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.451569  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.451691  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.451704  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.452038  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.452567  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.452609  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.452623  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42579
	I0130 21:01:48.452697  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.452717  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.452850  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.453204  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.453803  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.453844  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.454154  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.456104  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.456885  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.456911  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.457492  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.458163  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.458198  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.458587  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42093
	I0130 21:01:48.459044  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.459369  648432 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-444608"
	I0130 21:01:48.459422  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:48.459514  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.459535  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.459833  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.459891  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.459910  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.460409  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.460440  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.462288  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36319
	I0130 21:01:48.462685  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.463166  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.463193  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.463517  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.463660  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.469170  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33645
	I0130 21:01:48.470155  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.470845  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.470872  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.473061  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37411
	I0130 21:01:48.473645  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.473732  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.474229  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.474251  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.474386  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.474468  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.474742  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.475048  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.477831  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I0130 21:01:48.478228  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.478668  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.478696  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.479224  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.479464  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.479555  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.481729  648432 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.13
	I0130 21:01:48.483267  648432 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0130 21:01:48.483295  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0130 21:01:48.483317  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.481270  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.482494  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36383
	I0130 21:01:48.485632  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0130 21:01:48.484472  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.486200  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39745
	I0130 21:01:48.487202  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.489026  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0130 21:01:48.487818  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.487929  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.487970  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.488138  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.490337  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40943
	I0130 21:01:48.491803  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0130 21:01:48.490643  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.490681  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.490878  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.491203  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.493100  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.494456  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0130 21:01:48.493342  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.493537  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.493565  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.493642  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.497506  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0130 21:01:48.496763  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41283
	I0130 21:01:48.496769  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33547
	I0130 21:01:48.496774  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42631
	I0130 21:01:48.496774  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41873
	I0130 21:01:48.496782  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.496789  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.496782  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.497279  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.498280  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
	I0130 21:01:48.499276  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0130 21:01:48.499300  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.500913  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.500966  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.500994  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.501512  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0130 21:01:48.501112  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.503131  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0130 21:01:48.501926  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.501928  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.501966  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.502112  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.502326  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.502389  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.502441  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.502611  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.502655  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37437
	I0130 21:01:48.504448  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.504507  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.504563  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.504564  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0130 21:01:48.504574  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0130 21:01:48.504587  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.504593  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.504974  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.506544  648432 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 21:01:48.508053  648432 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.24.0
	I0130 21:01:48.505299  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.505306  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.505773  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.505857  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.505906  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.506099  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.506448  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.508251  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.509053  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.510025  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.510026  648432 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0130 21:01:48.510285  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.510347  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.510541  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.511326  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.511435  648432 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 21:01:48.511444  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 21:01:48.511459  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.511502  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.511525  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.511588  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0130 21:01:48.511601  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.510578  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.512840  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.512911  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.512955  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.513154  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.513238  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.514980  648432 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0130 21:01:48.513416  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.514220  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.514677  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.514714  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.515293  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.516088  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.513401  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.517059  648432 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 21:01:48.517072  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 21:01:48.517089  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.517218  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.519811  648432 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0130 21:01:48.517586  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.517663  648432 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 21:01:48.517743  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.517945  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.519012  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.519285  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41393
	I0130 21:01:48.519646  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.520714  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.521114  648432 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0130 21:01:48.521258  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.521334  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43315
	I0130 21:01:48.522310  648432 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0130 21:01:48.522329  648432 out.go:177]   - Using image docker.io/registry:2.8.3
	I0130 21:01:48.522341  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 21:01:48.522645  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.523889  648432 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.9.5
	I0130 21:01:48.523926  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0130 21:01:48.525559  648432 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 21:01:48.523955  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.523974  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.524044  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.524048  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.524061  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.524334  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.524335  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.524352  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.524557  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.524776  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.525606  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.527088  648432 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0130 21:01:48.527094  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.528390  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.528414  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.528419  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.529146  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.529632  648432 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0130 21:01:48.529793  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.529888  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.530125  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.530224  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.530546  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.530746  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0130 21:01:48.532269  648432 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0130 21:01:48.532380  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.532399  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.532455  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.532467  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.532542  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.532559  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.532622  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.532753  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.532804  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.533996  648432 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 21:01:48.534025  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.535103  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.535166  648432 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0130 21:01:48.536560  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0130 21:01:48.536571  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.536584  648432 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0130 21:01:48.536597  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0130 21:01:48.535171  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.538020  648432 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0130 21:01:48.538033  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16103 bytes)
	I0130 21:01:48.538045  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.535400  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.535736  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.535824  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.536290  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.536614  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.538293  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.538862  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.539118  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:48.539164  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:48.539897  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.539957  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.540586  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.540630  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.541015  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.541298  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.541526  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.541753  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.542271  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.544315  648432 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0130 21:01:48.543079  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.543671  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.544067  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.544482  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.545370  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.545590  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.545416  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.545614  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.545671  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.545691  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.545710  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.545726  648432 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0130 21:01:48.545727  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.545737  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0130 21:01:48.545751  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.546313  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.546335  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.546313  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.546493  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.546551  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.546597  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.546801  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.546823  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.547445  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.549233  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.549651  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.549782  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.549785  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.549992  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.550132  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.550272  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.564486  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0130 21:01:48.565000  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:48.565577  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:48.565601  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:48.566020  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:48.566217  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:48.567909  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:48.569699  648432 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0130 21:01:48.571288  648432 out.go:177]   - Using image docker.io/busybox:stable
	I0130 21:01:48.572916  648432 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0130 21:01:48.572943  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0130 21:01:48.572968  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:48.576294  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.576728  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:48.576769  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:48.576942  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:48.577153  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:48.577344  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:48.577513  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:48.952257  648432 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-444608" context rescaled to 1 replicas
	I0130 21:01:48.952303  648432 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.85 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 21:01:48.954311  648432 out.go:177] * Verifying Kubernetes components...
	I0130 21:01:48.955840  648432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:01:48.975888  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 21:01:48.976382  648432 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 21:01:48.976409  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0130 21:01:48.997311  648432 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0130 21:01:48.997337  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0130 21:01:49.026119  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0130 21:01:49.050039  648432 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0130 21:01:49.050069  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0130 21:01:49.058245  648432 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 21:01:49.110367  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0130 21:01:49.117905  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0130 21:01:49.117930  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0130 21:01:49.130276  648432 node_ready.go:35] waiting up to 6m0s for node "addons-444608" to be "Ready" ...
	I0130 21:01:49.155018  648432 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0130 21:01:49.155049  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0130 21:01:49.158739  648432 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0130 21:01:49.160247  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0130 21:01:49.163067  648432 node_ready.go:49] node "addons-444608" has status "Ready":"True"
	I0130 21:01:49.163099  648432 node_ready.go:38] duration metric: took 32.791293ms waiting for node "addons-444608" to be "Ready" ...
	I0130 21:01:49.163113  648432 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:01:49.188466  648432 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace to be "Ready" ...
	I0130 21:01:49.258104  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 21:01:49.269175  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0130 21:01:49.287119  648432 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 21:01:49.287149  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 21:01:49.344197  648432 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0130 21:01:49.344237  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0130 21:01:49.350292  648432 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0130 21:01:49.350319  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0130 21:01:49.353799  648432 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0130 21:01:49.353818  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0130 21:01:49.358796  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0130 21:01:49.358828  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0130 21:01:49.360476  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0130 21:01:49.380021  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0130 21:01:49.408832  648432 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0130 21:01:49.408880  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0130 21:01:49.416746  648432 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0130 21:01:49.416785  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0130 21:01:49.586030  648432 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 21:01:49.586064  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 21:01:49.599388  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0130 21:01:49.631283  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0130 21:01:49.631322  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0130 21:01:49.636861  648432 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0130 21:01:49.636903  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0130 21:01:49.652992  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0130 21:01:49.654931  648432 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0130 21:01:49.654964  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0130 21:01:49.682274  648432 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0130 21:01:49.682307  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0130 21:01:49.718006  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 21:01:49.750823  648432 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0130 21:01:49.750873  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0130 21:01:49.762002  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0130 21:01:49.762028  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0130 21:01:49.811235  648432 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0130 21:01:49.811265  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0130 21:01:49.856559  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0130 21:01:49.856599  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0130 21:01:49.879474  648432 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0130 21:01:49.879502  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0130 21:01:49.881594  648432 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0130 21:01:49.881617  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0130 21:01:49.948612  648432 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0130 21:01:49.948647  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0130 21:01:49.978702  648432 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0130 21:01:49.978737  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0130 21:01:50.018971  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0130 21:01:50.029640  648432 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0130 21:01:50.029672  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0130 21:01:50.056032  648432 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 21:01:50.056060  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0130 21:01:50.131537  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 21:01:50.148147  648432 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0130 21:01:50.148181  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0130 21:01:50.148554  648432 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0130 21:01:50.148578  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0130 21:01:50.232121  648432 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0130 21:01:50.232164  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0130 21:01:50.240739  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0130 21:01:50.313984  648432 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0130 21:01:50.314024  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0130 21:01:50.382883  648432 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0130 21:01:50.382910  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0130 21:01:50.436123  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0130 21:01:52.691564  648432 pod_ready.go:102] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:01:52.780031  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.804091876s)
	I0130 21:01:52.780106  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:52.780119  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:52.780453  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:52.780488  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:52.780503  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:52.780519  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:52.780819  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:52.780835  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:52.780842  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:53.471270  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:53.471301  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:53.471559  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:53.471585  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:53.471618  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:55.009182  648432 pod_ready.go:102] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:01:55.372243  648432 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (6.313951867s)
	I0130 21:01:55.372285  648432 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
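(The injection logged above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway, 192.168.39.1 in this run. A minimal sketch of how one could confirm the patched Corefile against the same cluster, assuming kubectl access with the kubeconfig used in the log; the grep pattern and context length are illustrative only:

    # inspect the patched CoreDNS ConfigMap; expect the injected hosts block
    kubectl -n kube-system get configmap coredns -o yaml | grep -A 3 'hosts {'
    #        hosts {
    #           192.168.39.1 host.minikube.internal
    #           fallthrough
    #        }
)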
	I0130 21:01:55.373267  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (6.347107762s)
	I0130 21:01:55.373335  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:55.373355  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:55.373768  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:55.373786  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:55.373791  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:55.373797  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:55.373810  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:55.374081  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:55.374114  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.014366  648432 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0130 21:01:56.014411  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:56.017845  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:56.018365  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:56.018405  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:56.018606  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:56.018841  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:56.019056  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:56.019218  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:56.222847  648432 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0130 21:01:56.243192  648432 addons.go:234] Setting addon gcp-auth=true in "addons-444608"
	I0130 21:01:56.243268  648432 host.go:66] Checking if "addons-444608" exists ...
	I0130 21:01:56.243711  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:56.243762  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:56.259860  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34139
	I0130 21:01:56.260496  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:56.261053  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:56.261081  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:56.261430  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:56.261905  648432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:01:56.261966  648432 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:01:56.277022  648432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I0130 21:01:56.277555  648432 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:01:56.278153  648432 main.go:141] libmachine: Using API Version  1
	I0130 21:01:56.278182  648432 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:01:56.278593  648432 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:01:56.278796  648432 main.go:141] libmachine: (addons-444608) Calling .GetState
	I0130 21:01:56.280578  648432 main.go:141] libmachine: (addons-444608) Calling .DriverName
	I0130 21:01:56.280850  648432 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0130 21:01:56.280882  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHHostname
	I0130 21:01:56.283772  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:56.284244  648432 main.go:141] libmachine: (addons-444608) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ab:dd:46", ip: ""} in network mk-addons-444608: {Iface:virbr1 ExpiryTime:2024-01-30 22:01:06 +0000 UTC Type:0 Mac:52:54:00:ab:dd:46 Iaid: IPaddr:192.168.39.85 Prefix:24 Hostname:addons-444608 Clientid:01:52:54:00:ab:dd:46}
	I0130 21:01:56.284275  648432 main.go:141] libmachine: (addons-444608) DBG | domain addons-444608 has defined IP address 192.168.39.85 and MAC address 52:54:00:ab:dd:46 in network mk-addons-444608
	I0130 21:01:56.284449  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHPort
	I0130 21:01:56.284648  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHKeyPath
	I0130 21:01:56.284812  648432 main.go:141] libmachine: (addons-444608) Calling .GetSSHUsername
	I0130 21:01:56.284980  648432 sshutil.go:53] new ssh client: &{IP:192.168.39.85 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/addons-444608/id_rsa Username:docker}
	I0130 21:01:56.545949  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.435528797s)
	I0130 21:01:56.546034  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.546049  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.546506  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:56.546595  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.546613  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.546631  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.546645  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.547036  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.547056  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.623037  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.364877175s)
	I0130 21:01:56.623104  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.353889418s)
	I0130 21:01:56.623151  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.623173  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.623112  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.623216  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.623481  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.623508  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.623518  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.623517  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:56.623527  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.623591  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.623603  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.623607  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:56.623612  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.623674  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.623770  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.623785  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.623841  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:56.623898  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.623911  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:56.703148  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:56.703179  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:56.703620  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:56.703648  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:57.337290  648432 pod_ready.go:102] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:01:58.060929  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.700411744s)
	I0130 21:01:58.060989  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061006  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.061027  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.461607135s)
	I0130 21:01:58.061056  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061071  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.060991  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.680942866s)
	I0130 21:01:58.061204  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.343160188s)
	I0130 21:01:58.061215  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061225  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061258  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.061122  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.408074187s)
	I0130 21:01:58.061282  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061290  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.04227663s)
	I0130 21:01:58.061299  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.061315  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061258  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.061407  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.92983514s)
	W0130 21:01:58.061437  648432 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0130 21:01:58.061505  648432 retry.go:31] will retry after 303.823004ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
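(The failure above is an ordering race: the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them are applied in a single kubectl invocation, so the class can be rejected before the CRDs are established; the test simply retries, and later re-applies with --force. A hedged sketch of the same ordering made explicit, assuming kubectl is run with the same kubeconfig and the addon file paths shown in the log:

    # apply the CRDs first and wait until the API server has registered them
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    # only then apply the resources that instantiate those CRDs
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
)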
	I0130 21:01:58.061594  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.820821447s)
	I0130 21:01:58.061722  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.061757  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.061821  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.061949  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.061975  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.062006  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062015  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062023  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062031  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.062059  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062081  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.062084  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062096  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062107  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.062118  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.062141  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.062108  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062157  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062166  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062167  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062174  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062178  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062184  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.062187  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.062200  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062206  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062208  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.062215  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062224  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062233  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.062158  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062267  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062276  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.062313  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.062812  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.062828  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.062838  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:58.062847  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:58.063684  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.063742  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.063760  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.063911  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.063934  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.063943  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.063951  648432 addons.go:470] Verifying addon registry=true in "addons-444608"
	I0130 21:01:58.066356  648432 out.go:177] * Verifying registry addon...
	I0130 21:01:58.064544  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.064561  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.064577  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.064616  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.064641  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.064660  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.064660  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.064686  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.064696  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:58.064715  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:58.067848  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.067857  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.067864  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.067881  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.067880  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:58.067892  648432 addons.go:470] Verifying addon ingress=true in "addons-444608"
	I0130 21:01:58.067869  648432 addons.go:470] Verifying addon metrics-server=true in "addons-444608"
	I0130 21:01:58.069440  648432 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-444608 service yakd-dashboard -n yakd-dashboard
	
	I0130 21:01:58.068842  648432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0130 21:01:58.071644  648432 out.go:177] * Verifying ingress addon...
	I0130 21:01:58.073868  648432 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0130 21:01:58.094727  648432 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0130 21:01:58.094751  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:01:58.105741  648432 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0130 21:01:58.105764  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:01:58.366117  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0130 21:01:58.596246  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:01:58.627402  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:01:59.124875  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.68869146s)
	I0130 21:01:59.124938  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:59.124952  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:59.124970  648432 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.844096858s)
	I0130 21:01:59.126912  648432 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20231011-8b53cabe0
	I0130 21:01:59.125370  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:59.125401  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:01:59.128449  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:59.128488  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:01:59.129726  648432 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0130 21:01:59.128503  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:01:59.131362  648432 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0130 21:01:59.131384  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0130 21:01:59.131623  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:01:59.131667  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:01:59.131691  648432 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-444608"
	I0130 21:01:59.133317  648432 out.go:177] * Verifying csi-hostpath-driver addon...
	I0130 21:01:59.135624  648432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0130 21:01:59.156216  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:01:59.156408  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:01:59.167047  648432 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0130 21:01:59.167075  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0130 21:01:59.208596  648432 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0130 21:01:59.208621  648432 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5432 bytes)
	I0130 21:01:59.214338  648432 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0130 21:01:59.214366  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:01:59.253837  648432 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0130 21:01:59.342582  648432 pod_ready.go:102] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:01:59.577361  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:01:59.580555  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:01:59.643240  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:00.143402  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:00.148904  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:00.205604  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:00.602900  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:00.613971  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:00.670747  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:01.089981  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:01.090011  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:01.151082  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:01.621541  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:01.622695  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:01.660217  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:01.704098  648432 pod_ready.go:102] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:01.895451  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.529265731s)
	I0130 21:02:01.895515  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:02:01.895530  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:02:01.895565  648432 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.641685366s)
	I0130 21:02:01.895631  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:02:01.895645  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:02:01.895921  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:02:01.895974  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:02:01.895990  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:02:01.896000  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:02:01.896100  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:02:01.896123  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:02:01.896134  648432 main.go:141] libmachine: Making call to close driver server
	I0130 21:02:01.896105  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:02:01.896148  648432 main.go:141] libmachine: (addons-444608) Calling .Close
	I0130 21:02:01.896230  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:02:01.896307  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:02:01.896264  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:02:01.898353  648432 main.go:141] libmachine: (addons-444608) DBG | Closing plugin on server side
	I0130 21:02:01.898356  648432 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:02:01.898380  648432 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:02:01.899624  648432 addons.go:470] Verifying addon gcp-auth=true in "addons-444608"
	I0130 21:02:01.901565  648432 out.go:177] * Verifying gcp-auth addon...
	I0130 21:02:01.903467  648432 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0130 21:02:01.906941  648432 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0130 21:02:01.906957  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:02.079788  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:02.081129  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:02.145103  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:02.409996  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:02.579373  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:02.581153  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:02.646422  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:02.907821  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:03.098955  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:03.099051  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:03.179675  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:03.410107  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:03.577758  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:03.579729  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:03.642170  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:03.712090  648432 pod_ready.go:102] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:03.912931  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:04.079122  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:04.085128  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:04.143750  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:04.418239  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:04.578537  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:04.585551  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:04.643388  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:04.702536  648432 pod_ready.go:97] pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.85 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-01-30 21:01:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-01-30 21:01:53 +0000 UTC,FinishedAt:2024-01-30 21:02:04 +0000 UTC,ContainerID:cri-o://27411ec51e2c21072949942cb831a264d030563e4b99f69c0e4530865960b3a0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://27411ec51e2c21072949942cb831a264d030563e4b99f69c0e4530865960b3a0 Started:0xc000121d90 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0130 21:02:04.702580  648432 pod_ready.go:81] duration metric: took 15.514073028s waiting for pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace to be "Ready" ...
	E0130 21:02:04.702591  648432 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-df4bx" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-01-30 21:01:48 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.85 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-01-30 21:01:48 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-01-30 21:01:53 +0000 UTC,FinishedAt:2024-01-30 21:02:04 +0000 UTC,ContainerID:cri-o://27411ec51e2c21072949942cb831a264d030563e4b99f69c0e4530865960b3a0,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e ContainerID:cri-o://27411ec51e2c21072949942cb831a264d030563e4b99f69c0e4530865960b3a0 Started:0xc000121d90 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0130 21:02:04.702598  648432 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:04.909623  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:05.083379  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:05.088161  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:05.142476  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:05.415438  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:05.606094  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:05.618421  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:05.642239  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:05.922165  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:06.084799  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:06.085110  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:06.144691  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:06.418881  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:06.590332  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:06.591258  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:06.642928  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:06.724259  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:06.919829  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:07.091146  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:07.093431  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:07.141356  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:07.408165  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:07.577208  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:07.581402  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:07.641891  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:07.908534  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:08.077174  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:08.083470  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:08.141428  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:08.407916  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:08.897960  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:08.899094  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:08.913358  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:08.918607  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:08.920202  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:09.226978  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:09.233906  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:09.236448  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:09.426310  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:09.586567  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:09.588635  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:09.641957  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:09.911899  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:10.077386  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:10.082182  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:10.141971  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:10.418205  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:10.586252  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:10.586523  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:10.662806  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:10.916711  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:11.109821  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:11.110404  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:11.142407  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:11.215660  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:11.424784  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:11.583437  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:11.583663  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:11.651535  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:11.908528  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:12.078300  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:12.080039  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:12.152735  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:12.410973  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:12.576342  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:12.580773  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:12.647129  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:12.916977  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:13.081346  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:13.081490  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:13.143031  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:13.407426  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:13.582759  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:13.582890  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:13.663464  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:13.957698  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:13.962178  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:14.081182  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:14.081540  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:14.158292  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:14.416503  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:14.590263  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:14.591718  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:14.643169  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:14.910999  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:15.083392  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:15.090124  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:15.142485  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:15.407552  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:15.577295  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:15.580605  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:15.645764  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:15.907559  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:16.081483  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:16.084690  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:16.141785  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:16.216955  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:16.413377  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:16.578039  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:16.580405  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:16.646220  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:16.908826  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:17.077981  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:17.079318  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:17.143557  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:17.587767  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:17.600193  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:17.600870  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:17.642142  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:17.908523  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:18.078956  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:18.081296  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:18.141208  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:18.228901  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:18.408968  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:18.577828  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:18.581059  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:18.642069  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:18.907968  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:19.092207  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:19.092965  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:19.142015  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:19.414832  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:19.582473  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:19.583521  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:19.643407  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:19.908774  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:20.082464  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:20.085008  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:20.142974  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:20.407923  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:20.577080  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:20.580489  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:20.643993  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:20.712547  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:20.909565  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:21.079229  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:21.082726  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:21.142033  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:21.407981  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:21.577933  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:21.580234  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:21.645346  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:21.912363  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:22.077694  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:22.079192  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:22.142905  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:22.407557  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:22.578742  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:22.579810  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:22.643112  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:22.908471  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:23.078068  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:23.083046  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:23.143164  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:23.211409  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:23.408200  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:23.577969  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:23.579575  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:23.653866  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:23.910319  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:24.077900  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:24.080695  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:24.144000  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:24.408709  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:24.579101  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:24.581630  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:24.642142  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:24.907753  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:25.080832  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:25.081093  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:25.142675  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:25.408161  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:25.578055  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:25.579601  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:25.642708  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:25.710988  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:25.907761  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:26.077644  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:26.079778  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:26.142931  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:26.408336  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:26.578651  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:26.578927  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:26.642747  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:26.910152  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:27.080115  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:27.081041  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:27.142832  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:27.407482  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:27.577893  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:27.579720  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:27.644896  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:27.713054  648432 pod_ready.go:102] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"False"
	I0130 21:02:27.911743  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:28.078088  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:28.080679  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:28.473733  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:28.474600  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:28.603830  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:28.618776  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:28.645708  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:28.724710  648432 pod_ready.go:92] pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace has status "Ready":"True"
	I0130 21:02:28.724741  648432 pod_ready.go:81] duration metric: took 24.022135139s waiting for pod "coredns-5dd5756b68-tmkmx" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.724756  648432 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.734128  648432 pod_ready.go:92] pod "etcd-addons-444608" in "kube-system" namespace has status "Ready":"True"
	I0130 21:02:28.734155  648432 pod_ready.go:81] duration metric: took 9.389192ms waiting for pod "etcd-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.734169  648432 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.741500  648432 pod_ready.go:92] pod "kube-apiserver-addons-444608" in "kube-system" namespace has status "Ready":"True"
	I0130 21:02:28.741528  648432 pod_ready.go:81] duration metric: took 7.349907ms waiting for pod "kube-apiserver-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.741540  648432 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.748801  648432 pod_ready.go:92] pod "kube-controller-manager-addons-444608" in "kube-system" namespace has status "Ready":"True"
	I0130 21:02:28.748825  648432 pod_ready.go:81] duration metric: took 7.276772ms waiting for pod "kube-controller-manager-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.748837  648432 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-gwwh9" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.754000  648432 pod_ready.go:92] pod "kube-proxy-gwwh9" in "kube-system" namespace has status "Ready":"True"
	I0130 21:02:28.754020  648432 pod_ready.go:81] duration metric: took 5.176675ms waiting for pod "kube-proxy-gwwh9" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.754028  648432 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:28.909454  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:29.077409  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:29.078963  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:29.121618  648432 pod_ready.go:92] pod "kube-scheduler-addons-444608" in "kube-system" namespace has status "Ready":"True"
	I0130 21:02:29.121643  648432 pod_ready.go:81] duration metric: took 367.609032ms waiting for pod "kube-scheduler-addons-444608" in "kube-system" namespace to be "Ready" ...
	I0130 21:02:29.121652  648432 pod_ready.go:38] duration metric: took 39.958524326s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:02:29.121671  648432 api_server.go:52] waiting for apiserver process to appear ...
	I0130 21:02:29.121727  648432 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:02:29.147850  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:29.169348  648432 api_server.go:72] duration metric: took 40.217012725s to wait for apiserver process to appear ...
	I0130 21:02:29.169378  648432 api_server.go:88] waiting for apiserver healthz status ...
	I0130 21:02:29.169400  648432 api_server.go:253] Checking apiserver healthz at https://192.168.39.85:8443/healthz ...
	I0130 21:02:29.174572  648432 api_server.go:279] https://192.168.39.85:8443/healthz returned 200:
	ok
	I0130 21:02:29.176053  648432 api_server.go:141] control plane version: v1.28.4
	I0130 21:02:29.176087  648432 api_server.go:131] duration metric: took 6.700994ms to wait for apiserver health ...
	I0130 21:02:29.176101  648432 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:02:29.316256  648432 system_pods.go:59] 18 kube-system pods found
	I0130 21:02:29.316289  648432 system_pods.go:61] "coredns-5dd5756b68-tmkmx" [d1ca7975-cdc8-49df-a84d-a508a524e812] Running
	I0130 21:02:29.316296  648432 system_pods.go:61] "csi-hostpath-attacher-0" [dda6d8f4-fce1-4d41-97b1-de66bbb42a39] Running
	I0130 21:02:29.316304  648432 system_pods.go:61] "csi-hostpath-resizer-0" [ae351c66-a681-48bd-b865-064194362eb0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0130 21:02:29.316310  648432 system_pods.go:61] "csi-hostpathplugin-jlxzp" [25d15045-eb02-482b-8fef-4aa31e90d93d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0130 21:02:29.316319  648432 system_pods.go:61] "etcd-addons-444608" [e5285df6-d128-48c8-8793-76e3eff8ed47] Running
	I0130 21:02:29.316336  648432 system_pods.go:61] "kube-apiserver-addons-444608" [771b47bb-45b2-495b-b620-51ba6440ba37] Running
	I0130 21:02:29.316342  648432 system_pods.go:61] "kube-controller-manager-addons-444608" [7f76f9e9-076e-496e-8288-23f6d33d09c1] Running
	I0130 21:02:29.316358  648432 system_pods.go:61] "kube-ingress-dns-minikube" [1569e976-2932-44de-849e-d7c8d97d191f] Running
	I0130 21:02:29.316366  648432 system_pods.go:61] "kube-proxy-gwwh9" [a1e3cb19-0b4e-4f26-bcb7-42fe4679950d] Running
	I0130 21:02:29.316375  648432 system_pods.go:61] "kube-scheduler-addons-444608" [6de01537-185b-45bf-9d72-9281347a809a] Running
	I0130 21:02:29.316389  648432 system_pods.go:61] "metrics-server-7c66d45ddc-hjdhk" [0fffac63-471a-4ee4-bb41-90cc5a14c096] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 21:02:29.316405  648432 system_pods.go:61] "nvidia-device-plugin-daemonset-z6z8l" [6f033fa5-926c-4d73-b45a-1566a992e73d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0130 21:02:29.316412  648432 system_pods.go:61] "registry-kt65f" [5294b992-54aa-45df-96fb-08f9593167ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0130 21:02:29.316420  648432 system_pods.go:61] "registry-proxy-w5n4t" [dac229c0-d6b4-4672-a1b0-fd5785554894] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0130 21:02:29.316430  648432 system_pods.go:61] "snapshot-controller-58dbcc7b99-bxlk5" [6352eab3-eeb4-4981-ba72-419edabdcd46] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 21:02:29.316443  648432 system_pods.go:61] "snapshot-controller-58dbcc7b99-jclfl" [cee0c966-04e1-4eaa-8fd1-efa2949d47ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 21:02:29.316449  648432 system_pods.go:61] "storage-provisioner" [fcc04191-1d40-420e-a32d-f39ad0c10192] Running
	I0130 21:02:29.316458  648432 system_pods.go:61] "tiller-deploy-7b677967b9-6kl6h" [154851b3-bb81-4418-85bc-bd5eaa9f28b6] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0130 21:02:29.316466  648432 system_pods.go:74] duration metric: took 140.357623ms to wait for pod list to return data ...
	I0130 21:02:29.316481  648432 default_sa.go:34] waiting for default service account to be created ...
	I0130 21:02:29.424036  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:29.507041  648432 default_sa.go:45] found service account: "default"
	I0130 21:02:29.507077  648432 default_sa.go:55] duration metric: took 190.585119ms for default service account to be created ...
	I0130 21:02:29.507094  648432 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 21:02:29.577496  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:29.579591  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:29.643789  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:29.722537  648432 system_pods.go:86] 18 kube-system pods found
	I0130 21:02:29.722576  648432 system_pods.go:89] "coredns-5dd5756b68-tmkmx" [d1ca7975-cdc8-49df-a84d-a508a524e812] Running
	I0130 21:02:29.722587  648432 system_pods.go:89] "csi-hostpath-attacher-0" [dda6d8f4-fce1-4d41-97b1-de66bbb42a39] Running
	I0130 21:02:29.722600  648432 system_pods.go:89] "csi-hostpath-resizer-0" [ae351c66-a681-48bd-b865-064194362eb0] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0130 21:02:29.722611  648432 system_pods.go:89] "csi-hostpathplugin-jlxzp" [25d15045-eb02-482b-8fef-4aa31e90d93d] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0130 21:02:29.722620  648432 system_pods.go:89] "etcd-addons-444608" [e5285df6-d128-48c8-8793-76e3eff8ed47] Running
	I0130 21:02:29.722628  648432 system_pods.go:89] "kube-apiserver-addons-444608" [771b47bb-45b2-495b-b620-51ba6440ba37] Running
	I0130 21:02:29.722636  648432 system_pods.go:89] "kube-controller-manager-addons-444608" [7f76f9e9-076e-496e-8288-23f6d33d09c1] Running
	I0130 21:02:29.722645  648432 system_pods.go:89] "kube-ingress-dns-minikube" [1569e976-2932-44de-849e-d7c8d97d191f] Running
	I0130 21:02:29.722655  648432 system_pods.go:89] "kube-proxy-gwwh9" [a1e3cb19-0b4e-4f26-bcb7-42fe4679950d] Running
	I0130 21:02:29.722664  648432 system_pods.go:89] "kube-scheduler-addons-444608" [6de01537-185b-45bf-9d72-9281347a809a] Running
	I0130 21:02:29.722702  648432 system_pods.go:89] "metrics-server-7c66d45ddc-hjdhk" [0fffac63-471a-4ee4-bb41-90cc5a14c096] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 21:02:29.722723  648432 system_pods.go:89] "nvidia-device-plugin-daemonset-z6z8l" [6f033fa5-926c-4d73-b45a-1566a992e73d] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0130 21:02:29.722736  648432 system_pods.go:89] "registry-kt65f" [5294b992-54aa-45df-96fb-08f9593167ee] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0130 21:02:29.722752  648432 system_pods.go:89] "registry-proxy-w5n4t" [dac229c0-d6b4-4672-a1b0-fd5785554894] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0130 21:02:29.722784  648432 system_pods.go:89] "snapshot-controller-58dbcc7b99-bxlk5" [6352eab3-eeb4-4981-ba72-419edabdcd46] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 21:02:29.722807  648432 system_pods.go:89] "snapshot-controller-58dbcc7b99-jclfl" [cee0c966-04e1-4eaa-8fd1-efa2949d47ad] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0130 21:02:29.722816  648432 system_pods.go:89] "storage-provisioner" [fcc04191-1d40-420e-a32d-f39ad0c10192] Running
	I0130 21:02:29.722827  648432 system_pods.go:89] "tiller-deploy-7b677967b9-6kl6h" [154851b3-bb81-4418-85bc-bd5eaa9f28b6] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0130 21:02:29.722841  648432 system_pods.go:126] duration metric: took 215.733573ms to wait for k8s-apps to be running ...
	I0130 21:02:29.722859  648432 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 21:02:29.722923  648432 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:02:29.763915  648432 system_svc.go:56] duration metric: took 41.04147ms WaitForService to wait for kubelet.
	I0130 21:02:29.763959  648432 kubeadm.go:581] duration metric: took 40.811629094s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 21:02:29.763991  648432 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:02:29.921014  648432 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:02:29.921085  648432 node_conditions.go:123] node cpu capacity is 2
	I0130 21:02:29.921104  648432 node_conditions.go:105] duration metric: took 157.107046ms to run NodePressure ...
	I0130 21:02:29.921120  648432 start.go:228] waiting for startup goroutines ...
	I0130 21:02:29.929392  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:30.079243  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:30.081413  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:30.148963  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:30.408218  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:30.583077  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:30.592935  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:30.648271  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:30.908223  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:31.083337  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:31.087353  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:31.141003  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:31.408050  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:31.588685  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:31.589200  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:31.646288  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:31.912723  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:32.077597  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:32.082622  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:32.141927  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:32.408356  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:32.577060  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:32.583080  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:32.646972  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:32.908603  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:33.079689  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:33.082896  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:33.142671  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:33.409015  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:33.577877  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:33.580815  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:33.649685  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:33.910988  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:34.076710  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:34.080084  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:34.144942  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:34.408511  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:34.588351  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:34.591668  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:34.657150  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:34.907805  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:35.084343  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:35.087925  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:35.147086  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:35.409107  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:35.590501  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:35.594989  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:35.647795  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:35.908581  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:36.077659  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:36.081353  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:36.143478  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:36.599837  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:36.602097  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:36.602660  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:36.647196  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:36.909275  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:37.078358  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:37.085113  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:37.145378  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:37.408883  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:37.593718  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:37.594295  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:37.643461  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:37.908733  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:38.078931  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:38.081202  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:38.146353  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:38.407826  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:38.578113  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:38.580959  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:38.685503  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:38.916736  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:39.092612  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:39.096619  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:39.143316  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:39.408034  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:39.577980  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:39.580525  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:39.641328  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:39.907915  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:40.077650  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:40.079495  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:40.142481  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:40.407902  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:40.579366  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:40.581719  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:40.645197  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:40.908775  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:41.078100  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:41.079807  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:41.142393  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:41.407717  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:41.588489  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:41.589741  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:41.647603  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:41.909224  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:42.078324  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:42.080088  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:42.142146  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:42.407262  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:42.578491  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:42.580460  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:42.642795  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:42.908610  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:43.079461  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:43.081917  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:43.142343  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:43.408888  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:43.576926  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:43.580308  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:43.642341  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:44.157597  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:44.158435  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:44.160382  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:44.161706  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:44.408687  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:44.577889  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:44.581110  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:44.642320  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:44.910828  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:45.076262  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:45.082384  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:45.151069  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:45.408591  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:45.578018  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:45.579076  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:45.644568  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:45.911921  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:46.079265  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:46.080631  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:46.144716  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:46.409805  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:46.578015  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:46.580772  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:46.642843  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:46.908298  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:47.077601  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:47.078859  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:47.141939  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:47.409445  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:47.579987  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:47.581347  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:47.642232  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:47.907664  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:48.078011  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:48.079479  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:48.141663  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:48.408798  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:48.577658  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:48.579967  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:48.643240  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:48.907735  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:49.079114  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:49.080759  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:49.143234  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:49.408687  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:49.579222  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0130 21:02:49.581374  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:49.642112  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:49.914155  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:50.077767  648432 kapi.go:107] duration metric: took 52.00892088s to wait for kubernetes.io/minikube-addons=registry ...
	I0130 21:02:50.079439  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:50.149381  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:50.408692  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:50.580922  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:50.643818  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:50.908528  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:51.079863  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:51.146056  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:51.408917  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:51.579843  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:51.650416  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:51.917851  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:52.080747  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:52.170994  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:52.408270  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:52.580631  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:52.642733  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:52.908055  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:53.079737  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:53.141364  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:53.407416  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:53.579182  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:53.642411  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:53.908086  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:54.078488  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:54.149978  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:54.409504  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:54.579272  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:54.641796  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:54.908184  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:55.080539  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:55.152311  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:55.408149  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:55.579180  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:55.642696  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:55.909453  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:56.279933  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:56.281696  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:56.410458  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:56.579287  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:56.642487  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:56.907964  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:57.080507  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:57.143748  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:57.408127  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:57.582054  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:57.643581  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:57.912666  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:58.080575  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:58.141510  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:58.407979  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:58.579589  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:58.655618  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:58.907496  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:59.079414  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:59.142474  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:59.407213  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:02:59.578896  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:02:59.643142  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:02:59.910650  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:00.079551  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:00.144026  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:00.456324  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:00.578866  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:00.642338  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:00.908499  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:01.079432  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:01.142449  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:01.407890  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:01.580639  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:01.642725  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:01.913105  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:02.080198  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:02.142815  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:02.408610  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:02.579900  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:02.642177  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:02.907442  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:03.080332  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:03.146776  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:03.409267  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:03.584069  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:03.651212  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:03.915555  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:04.079487  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:04.157406  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:04.409234  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:04.583914  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:04.644723  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:04.909483  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:05.079826  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:05.142283  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:05.407178  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:05.595040  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:05.650195  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:05.931164  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:06.093448  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:06.143492  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:06.410114  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:06.585364  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:06.877729  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:06.925760  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:07.086739  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:07.159080  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:07.413802  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:07.581123  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:07.644910  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:07.912869  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:08.080036  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:08.145715  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:08.414777  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:08.584546  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:08.648185  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:08.907036  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:09.083313  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:09.141604  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:09.415805  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:09.579189  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:09.642303  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:09.914778  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:10.079976  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:10.142409  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:10.407622  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:10.582028  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:10.641906  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:10.909162  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:11.078410  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:11.141958  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:11.409079  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:11.580253  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:11.642775  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:11.909888  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:12.082825  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:12.142267  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:12.418807  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0130 21:03:12.582990  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:12.642555  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:12.907729  648432 kapi.go:107] duration metric: took 1m11.004256659s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0130 21:03:12.909829  648432 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-444608 cluster.
	I0130 21:03:12.911468  648432 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0130 21:03:12.912983  648432 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0130 21:03:13.080172  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:13.142677  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:13.579727  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:13.642196  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:14.084580  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:14.151593  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:14.579081  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:14.642087  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:15.079276  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:15.142186  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:15.579365  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:15.643980  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:16.079485  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:16.147382  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:16.579642  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:16.656261  648432 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0130 21:03:17.079054  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:17.142410  648432 kapi.go:107] duration metric: took 1m18.006780834s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0130 21:03:17.578516  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:18.080690  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:18.579524  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:19.078407  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:19.578656  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:20.079263  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:20.581089  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:21.078541  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:21.579302  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:22.079361  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:22.579572  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:23.079792  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:23.579311  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:24.079098  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:24.579964  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:25.079265  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:25.578434  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:26.080610  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:26.584050  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:27.198215  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:27.580536  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:28.078951  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:28.578561  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:29.078617  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:29.579071  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:30.079718  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:30.579626  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:31.079311  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:31.579602  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:32.079055  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:32.580215  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:33.079190  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:33.578963  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:34.080020  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:34.578861  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:35.079281  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:35.579691  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:36.078983  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:36.579725  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:37.078650  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:37.579535  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:38.079770  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:38.580688  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:39.080354  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:39.578429  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:40.078927  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:40.579378  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:41.081014  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:41.579336  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:42.080818  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:42.580244  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:43.078208  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:43.578401  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:44.079084  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:44.579826  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:45.080054  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:45.579945  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:46.080123  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:46.580223  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:47.078508  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:47.578745  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:48.079224  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:48.578862  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:49.079796  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:49.578828  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:50.079066  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:50.579630  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:51.079382  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:51.578926  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:52.079748  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:52.579851  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:53.079650  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:53.579543  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:54.079423  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:54.580604  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:55.078914  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:55.580328  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:56.079137  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:56.578933  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:57.080558  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:57.579688  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:58.079674  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:58.580062  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:59.079388  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:03:59.578845  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:00.079240  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:00.579280  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:01.078841  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:01.579390  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:02.078702  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:02.579148  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:03.078615  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:03.579466  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:04.079551  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:04.579237  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:05.078706  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:05.579580  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:06.078901  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:06.580771  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:07.079337  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:07.578865  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:08.079592  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:08.579631  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:09.079032  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:09.580998  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:10.079403  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:10.578882  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:11.079867  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:11.581916  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:12.079906  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:12.580156  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:13.079021  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:13.579906  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:14.080238  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:14.579470  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:15.078792  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:15.580053  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:16.080260  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:16.582829  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:17.081490  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:17.579312  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:18.079257  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:18.578926  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:19.078918  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:19.580259  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:20.079132  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:20.578539  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:21.081322  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:21.579511  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:22.079298  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:22.579115  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:23.079503  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:23.578824  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:24.081161  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:24.579423  648432 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0130 21:04:25.080873  648432 kapi.go:107] duration metric: took 2m27.007000207s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0130 21:04:25.082771  648432 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, storage-provisioner, ingress-dns, storage-provisioner-rancher, inspektor-gadget, helm-tiller, nvidia-device-plugin, metrics-server, yakd, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0130 21:04:25.084204  648432 addons.go:505] enable addons completed in 2m36.70993432s: enabled=[default-storageclass cloud-spanner storage-provisioner ingress-dns storage-provisioner-rancher inspektor-gadget helm-tiller nvidia-device-plugin metrics-server yakd volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0130 21:04:25.084252  648432 start.go:233] waiting for cluster config update ...
	I0130 21:04:25.084272  648432 start.go:242] writing updated cluster config ...
	I0130 21:04:25.084556  648432 ssh_runner.go:195] Run: rm -f paused
	I0130 21:04:25.141349  648432 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 21:04:25.143150  648432 out.go:177] * Done! kubectl is now configured to use "addons-444608" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 21:01:02 UTC, ends at Tue 2024-01-30 21:07:31 UTC. --
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.895235405Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6af0a596-8110-40a8-aebe-857053f079aa name=/runtime.v1.RuntimeService/Version
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.896210052Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=79d36b5a-17fc-418d-a2ad-ecdc6fba3b9d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.897585593Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648850897569861,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=79d36b5a-17fc-418d-a2ad-ecdc6fba3b9d name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.898268635Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=50c93296-73c2-4ff1-8284-2c0cce71642d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.898350460Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=50c93296-73c2-4ff1-8284-2c0cce71642d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.899899918Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04d4f719c4999c487f3329529b7a5a68628de670b4b149e53add5b419830669f,PodSandboxId:34c21f74a0112ccf6ff0028cf4a7ec8a666b9f800afb8ef7136a2af3474f1b00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706648842932718914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-q5cdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a268081-26a4-493c-8ef5-949abe27802a,},Annotations:map[string]string{io.kubernetes.container.hash: f12696b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f01a2b4c842e2b4a87dd2f8eaf3c61e42b3ee2320c540dfdd1ffcd69929d62,PodSandboxId:79da5bfcd78c34a8e5ad435c296d62995524ab3d602a7d09721a4b379e281717,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706648710458067972,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-2hbhw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f59807c3-31fe-4692-8bb3-ff395c694341,},An
notations:map[string]string{io.kubernetes.container.hash: f4fb525c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484aef6de2ec6d72c368c8f505efd6c10375fb85afaeec32f03077af57c1b243,PodSandboxId:c770e4cb3cfc542e3f243e0aeb2b876c7dc3427a3c4d01e6fc8b14157a08b06d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706648703203392130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 26f76318-e1c6-4db9-8edd-412294dd7aa8,},Annotations:map[string]string{io.kubernetes.container.hash: a1deaff6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf5e6c2d6dc773d15b4b8cc85dae2ee158158fb615322f47535a4af2e327f5e,PodSandboxId:b59ddecf5c5edea1ca7ada2a372cd272d9812b184b2aeb163a73f962e54db4e5,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648595947837337,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-f2x7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d1b2f03-a2a3-4e87-96c9-b5f7823baccb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bad8264,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a3e19bd9fcdf4c6de4781e80b4026ff6bc1ef3691f180665f396e910a0f59c,PodSandboxId:78538a5da7bde9f7933660d48f36e723eacf0ce4714278775049784e9fdeb74e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706648591989013947,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ldw8p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b80d5952-63e5-433f-a5e6-83b20da6158f,},Annotations:map[string]string{io.kubernetes.container.hash: 791a1087,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a082e476a3f4dfc5ebeb78c5a6a7b6840de20127bdc5edcf075c9912dab420f,PodSandboxId:776b9ae166d5037cc18508e876cae4d8cca88e78056f472111549e012b05af14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648583283609977,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p44xt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fd51cae-fa0b-4f07-a9cf-e7a2558466a7,},Annotations:map[string]string{io.kubernetes.container.hash: 95cb302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3664f23a39aa8238c3ccd1f8f813ff563e426560349073097647545feeb786,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706648565522654452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f6e333eb7de13d95474e41bbec4f7102a35c42b72112d063d3556070db0df9,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706648532292882152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deef12fad3fc95079c65f6e36cfd8c9d492356d13d8fa8d2c866997b76f0b9f6,PodSandboxId:cf9662f4d98c6a7d9bd77fe5e8fe43a550f2a27db748f38846caae57b263db9a,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706648529952846828,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-zb455,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2c78c5d5-0c99-43ad-9734-3555e64782bf,},Annotations:map[string]string{io.kubernetes.container.hash: f5e01f37,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a894455cabbcb1df9b0a9a45a13e46fee4795d0105a2c286947b6b405e97be,PodSandboxId:5b3c8edb2e18c05f435f98c194b71b1e655daaa6b5a0563a65ccbb0c61c97952,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.
k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706648526892415187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwwh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e3cb19-0b4e-4f26-bcb7-42fe4679950d,},Annotations:map[string]string{io.kubernetes.container.hash: d63cc2c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd556e5f803ce19a2dc76ea52d2af2c37c1a7ecc8c64281245af603e093be37,PodSandboxId:0eb1d83e8ae8297ae8ac9bf14401a32d9ca040704cdb0f1512c56efb0bc0a8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706648512272340112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tmkmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1ca7975-cdc8-49df-a84d-a508a524e812,},Annotations:map[string]string{io.kubernetes.container.hash: 149fc8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f231588d9af74fc3c19f1b0ecaf9366f422aba2c5594e37230172022b2229505,PodSandboxId:37df5b2890038ccb43d65f0c5ad35b7ff46c704b26750acd6ed8bddc8a0a8e55,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706648488687187631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f4fbe4a95d3603c2e96988dd606da2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1b81cdc3c5090362c77ffb9651f4f232080cc02b9b8fea8a5a65e4d554af6,PodSandboxId:14507a0545166eadbdfae74a5b8df18f602814ca5396860d5ed58bdbd50a8aeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706648488258003583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 184376adb1bc21476d5afd6fddaa3eea,},Annotations:map[string]string{io.kubernetes.container.hash: 39ee09ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ad245698646202d5fe5fc72f2b1a4284b6c078d51cf2742742ef086777a05f,PodSandboxId:79ac25a42da1c54dcee4007c0dfa103c278f2f570957027dfc0d58b00df16ca4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab96
9ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706648488102208505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea6a14b86fe02cadd8e13231b8e6134,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9abc940f285c816ff2a44571995f1704be9599053f587f32c4f0c512675369,PodSandboxId:c71145f4387a057f2e09148ffbaf35a5fd0759ab192032418f525aefdebd0c5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706648487893267355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b5b355a60008d74272a0482ef3baa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8466dab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=50c93296-73c2-4ff1-8284-2c0cce71642d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.944409987Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=4021eb71-1730-4ee0-a312-2748044c0aa7 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.944577270Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=4021eb71-1730-4ee0-a312-2748044c0aa7 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.945959340Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=85e7c3ee-2364-43bc-96ab-fa0bce394f6a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.947226333Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648850947207149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=85e7c3ee-2364-43bc-96ab-fa0bce394f6a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.948184503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d87aca6e-bab8-4117-917b-c179d152954a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.948264806Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d87aca6e-bab8-4117-917b-c179d152954a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.948637309Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04d4f719c4999c487f3329529b7a5a68628de670b4b149e53add5b419830669f,PodSandboxId:34c21f74a0112ccf6ff0028cf4a7ec8a666b9f800afb8ef7136a2af3474f1b00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706648842932718914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-q5cdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a268081-26a4-493c-8ef5-949abe27802a,},Annotations:map[string]string{io.kubernetes.container.hash: f12696b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f01a2b4c842e2b4a87dd2f8eaf3c61e42b3ee2320c540dfdd1ffcd69929d62,PodSandboxId:79da5bfcd78c34a8e5ad435c296d62995524ab3d602a7d09721a4b379e281717,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706648710458067972,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-2hbhw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f59807c3-31fe-4692-8bb3-ff395c694341,},An
notations:map[string]string{io.kubernetes.container.hash: f4fb525c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484aef6de2ec6d72c368c8f505efd6c10375fb85afaeec32f03077af57c1b243,PodSandboxId:c770e4cb3cfc542e3f243e0aeb2b876c7dc3427a3c4d01e6fc8b14157a08b06d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706648703203392130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 26f76318-e1c6-4db9-8edd-412294dd7aa8,},Annotations:map[string]string{io.kubernetes.container.hash: a1deaff6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf5e6c2d6dc773d15b4b8cc85dae2ee158158fb615322f47535a4af2e327f5e,PodSandboxId:b59ddecf5c5edea1ca7ada2a372cd272d9812b184b2aeb163a73f962e54db4e5,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648595947837337,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-f2x7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d1b2f03-a2a3-4e87-96c9-b5f7823baccb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bad8264,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a3e19bd9fcdf4c6de4781e80b4026ff6bc1ef3691f180665f396e910a0f59c,PodSandboxId:78538a5da7bde9f7933660d48f36e723eacf0ce4714278775049784e9fdeb74e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706648591989013947,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ldw8p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b80d5952-63e5-433f-a5e6-83b20da6158f,},Annotations:map[string]string{io.kubernetes.container.hash: 791a1087,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a082e476a3f4dfc5ebeb78c5a6a7b6840de20127bdc5edcf075c9912dab420f,PodSandboxId:776b9ae166d5037cc18508e876cae4d8cca88e78056f472111549e012b05af14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648583283609977,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p44xt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fd51cae-fa0b-4f07-a9cf-e7a2558466a7,},Annotations:map[string]string{io.kubernetes.container.hash: 95cb302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3664f23a39aa8238c3ccd1f8f813ff563e426560349073097647545feeb786,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706648565522654452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f6e333eb7de13d95474e41bbec4f7102a35c42b72112d063d3556070db0df9,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706648532292882152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deef12fad3fc95079c65f6e36cfd8c9d492356d13d8fa8d2c866997b76f0b9f6,PodSandboxId:cf9662f4d98c6a7d9bd77fe5e8fe43a550f2a27db748f38846caae57b263db9a,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706648529952846828,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-zb455,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2c78c5d5-0c99-43ad-9734-3555e64782bf,},Annotations:map[string]string{io.kubernetes.container.hash: f5e01f37,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a894455cabbcb1df9b0a9a45a13e46fee4795d0105a2c286947b6b405e97be,PodSandboxId:5b3c8edb2e18c05f435f98c194b71b1e655daaa6b5a0563a65ccbb0c61c97952,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.
k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706648526892415187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwwh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e3cb19-0b4e-4f26-bcb7-42fe4679950d,},Annotations:map[string]string{io.kubernetes.container.hash: d63cc2c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd556e5f803ce19a2dc76ea52d2af2c37c1a7ecc8c64281245af603e093be37,PodSandboxId:0eb1d83e8ae8297ae8ac9bf14401a32d9ca040704cdb0f1512c56efb0bc0a8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706648512272340112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tmkmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1ca7975-cdc8-49df-a84d-a508a524e812,},Annotations:map[string]string{io.kubernetes.container.hash: 149fc8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f231588d9af74fc3c19f1b0ecaf9366f422aba2c5594e37230172022b2229505,PodSandboxId:37df5b2890038ccb43d65f0c5ad35b7ff46c704b26750acd6ed8bddc8a0a8e55,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706648488687187631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f4fbe4a95d3603c2e96988dd606da2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1b81cdc3c5090362c77ffb9651f4f232080cc02b9b8fea8a5a65e4d554af6,PodSandboxId:14507a0545166eadbdfae74a5b8df18f602814ca5396860d5ed58bdbd50a8aeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706648488258003583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 184376adb1bc21476d5afd6fddaa3eea,},Annotations:map[string]string{io.kubernetes.container.hash: 39ee09ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ad245698646202d5fe5fc72f2b1a4284b6c078d51cf2742742ef086777a05f,PodSandboxId:79ac25a42da1c54dcee4007c0dfa103c278f2f570957027dfc0d58b00df16ca4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab96
9ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706648488102208505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea6a14b86fe02cadd8e13231b8e6134,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9abc940f285c816ff2a44571995f1704be9599053f587f32c4f0c512675369,PodSandboxId:c71145f4387a057f2e09148ffbaf35a5fd0759ab192032418f525aefdebd0c5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706648487893267355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b5b355a60008d74272a0482ef3baa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8466dab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d87aca6e-bab8-4117-917b-c179d152954a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.970684319Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=b9c7b02a-0f62-401e-beef-6d3538a763bd name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.971166240Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:34c21f74a0112ccf6ff0028cf4a7ec8a666b9f800afb8ef7136a2af3474f1b00,Metadata:&PodSandboxMetadata{Name:hello-world-app-5d77478584-q5cdt,Uid:2a268081-26a4-493c-8ef5-949abe27802a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648840467757029,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-5d77478584-q5cdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a268081-26a4-493c-8ef5-949abe27802a,pod-template-hash: 5d77478584,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:07:20.122891952Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:79da5bfcd78c34a8e5ad435c296d62995524ab3d602a7d09721a4b379e281717,Metadata:&PodSandboxMetadata{Name:headlamp-7ddfbb94ff-2hbhw,Uid:f59807c3-31fe-4692-8bb3-ff395c694341,Namespace
:headlamp,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648703745918252,Labels:map[string]string{app.kubernetes.io/instance: headlamp,app.kubernetes.io/name: headlamp,io.kubernetes.container.name: POD,io.kubernetes.pod.name: headlamp-7ddfbb94ff-2hbhw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f59807c3-31fe-4692-8bb3-ff395c694341,pod-template-hash: 7ddfbb94ff,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:05:03.367149183Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c770e4cb3cfc542e3f243e0aeb2b876c7dc3427a3c4d01e6fc8b14157a08b06d,Metadata:&PodSandboxMetadata{Name:nginx,Uid:26f76318-e1c6-4db9-8edd-412294dd7aa8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648698555780079,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 26f76318-e1c6-4db9-8edd-412294dd7aa8,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-
01-30T21:04:58.214337393Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5fec10217d50849eead5d899668fc16080e6475485b44539164272202d39bb9d,Metadata:&PodSandboxMetadata{Name:ingress-nginx-controller-69cff4fd79-r8dzs,Uid:eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1706648655922995754,Labels:map[string]string{app.kubernetes.io/component: controller,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-controller-69cff4fd79-r8dzs,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31,pod-template-hash: 69cff4fd79,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:01:57.964085254Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:78538a5da7bde9f7933660d48f36e723eacf0ce4714278775049784e9fdeb74e,Metadata:&PodSandboxMetadata{Na
me:gcp-auth-d4c87556c-ldw8p,Uid:b80d5952-63e5-433f-a5e6-83b20da6158f,Namespace:gcp-auth,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648585732652372,Labels:map[string]string{app: gcp-auth,io.kubernetes.container.name: POD,io.kubernetes.pod.name: gcp-auth-d4c87556c-ldw8p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b80d5952-63e5-433f-a5e6-83b20da6158f,kubernetes.io/minikube-addons: gcp-auth,pod-template-hash: d4c87556c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:02:01.514858402Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:776b9ae166d5037cc18508e876cae4d8cca88e78056f472111549e012b05af14,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-create-p44xt,Uid:0fd51cae-fa0b-4f07-a9cf-e7a2558466a7,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1706648520043612863,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kub
ernetes.io/controller-uid: 6dac3116-67f5-42ad-b2ae-febb9912ac33,batch.kubernetes.io/job-name: ingress-nginx-admission-create,controller-uid: 6dac3116-67f5-42ad-b2ae-febb9912ac33,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-create-p44xt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fd51cae-fa0b-4f07-a9cf-e7a2558466a7,job-name: ingress-nginx-admission-create,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:01:58.019699648Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b59ddecf5c5edea1ca7ada2a372cd272d9812b184b2aeb163a73f962e54db4e5,Metadata:&PodSandboxMetadata{Name:ingress-nginx-admission-patch-f2x7w,Uid:4d1b2f03-a2a3-4e87-96c9-b5f7823baccb,Namespace:ingress-nginx,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1706648519730826655,Labels:map[string]string{app.kubernetes.io/component: admission-webhook,app.kubernetes.io/instance: ingress-nginx,app.kubernetes.io/name: ingress-nginx,batch.kubernetes.io/controller-u
id: f803458e-f2ae-49d6-8f0d-36ab2bbb5751,batch.kubernetes.io/job-name: ingress-nginx-admission-patch,controller-uid: f803458e-f2ae-49d6-8f0d-36ab2bbb5751,io.kubernetes.container.name: POD,io.kubernetes.pod.name: ingress-nginx-admission-patch-f2x7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d1b2f03-a2a3-4e87-96c9-b5f7823baccb,job-name: ingress-nginx-admission-patch,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:01:58.022195161Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf9662f4d98c6a7d9bd77fe5e8fe43a550f2a27db748f38846caae57b263db9a,Metadata:&PodSandboxMetadata{Name:yakd-dashboard-9947fc6bf-zb455,Uid:2c78c5d5-0c99-43ad-9734-3555e64782bf,Namespace:yakd-dashboard,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648518376648214,Labels:map[string]string{app.kubernetes.io/instance: yakd-dashboard,app.kubernetes.io/name: yakd-dashboard,gcp-auth-skip-secret: true,io.kubernetes.container.name: POD,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-
zb455,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2c78c5d5-0c99-43ad-9734-3555e64782bf,pod-template-hash: 9947fc6bf,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:01:56.829080069Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:fcc04191-1d40-420e-a32d-f39ad0c10192,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648517027789269,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mo
de\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-01-30T21:01:56.686801751Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dca3302051415e14252933328b992f9a708c5071bf718683f1201b843251f57a,Metadata:&PodSandboxMetadata{Name:kube-ingress-dns-minikube,Uid:1569e976-2932-44de-849e-d7c8d97d191f,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1706648516413240377,Labels:map[string]string{app: minikube-ingress-dns,app.kubernetes.io/part-of: kube-system,io.kubern
etes.container.name: POD,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1569e976-2932-44de-849e-d7c8d97d191f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"app\":\"minikube-ingress-dns\",\"app.kubernetes.io/part-of\":\"kube-system\"},\"name\":\"kube-ingress-dns-minikube\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"DNS_PORT\",\"value\":\"53\"},{\"name\":\"POD_IP\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"status.podIP\"}}}],\"image\":\"gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"minikube-ingress-dns\",\"ports\":[{\"containerPort\":53,\"protocol\":\"UDP\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"minikube-ingress-dns\"}}\n,kubernetes.io/config.seen: 2024-01
-30T21:01:55.773756074Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5b3c8edb2e18c05f435f98c194b71b1e655daaa6b5a0563a65ccbb0c61c97952,Metadata:&PodSandboxMetadata{Name:kube-proxy-gwwh9,Uid:a1e3cb19-0b4e-4f26-bcb7-42fe4679950d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648508660614781,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-gwwh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e3cb19-0b4e-4f26-bcb7-42fe4679950d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:01:48.222864635Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:0eb1d83e8ae8297ae8ac9bf14401a32d9ca040704cdb0f1512c56efb0bc0a8a1,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-tmkmx,Uid:d1ca7975-cdc8-49df-a84d-a508a524e812,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648508573100295
,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-tmkmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1ca7975-cdc8-49df-a84d-a508a524e812,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T21:01:48.182326423Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c71145f4387a057f2e09148ffbaf35a5fd0759ab192032418f525aefdebd0c5e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-444608,Uid:b1b5b355a60008d74272a0482ef3baa4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648487446355789,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b5b355a60008d74272a0482ef3baa4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168
.39.85:8443,kubernetes.io/config.hash: b1b5b355a60008d74272a0482ef3baa4,kubernetes.io/config.seen: 2024-01-30T21:01:26.900888636Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:37df5b2890038ccb43d65f0c5ad35b7ff46c704b26750acd6ed8bddc8a0a8e55,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-444608,Uid:e0f4fbe4a95d3603c2e96988dd606da2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648487441748519,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f4fbe4a95d3603c2e96988dd606da2,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e0f4fbe4a95d3603c2e96988dd606da2,kubernetes.io/config.seen: 2024-01-30T21:01:26.900890493Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:79ac25a42da1c54dcee4007c0dfa103c278f2f570957027dfc0d58b00df16ca4,Metadata:&PodSandboxMetadata{Name:kub
e-controller-manager-addons-444608,Uid:7ea6a14b86fe02cadd8e13231b8e6134,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648487435835165,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea6a14b86fe02cadd8e13231b8e6134,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7ea6a14b86fe02cadd8e13231b8e6134,kubernetes.io/config.seen: 2024-01-30T21:01:26.900889785Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:14507a0545166eadbdfae74a5b8df18f602814ca5396860d5ed58bdbd50a8aeb,Metadata:&PodSandboxMetadata{Name:etcd-addons-444608,Uid:184376adb1bc21476d5afd6fddaa3eea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706648487371214437,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-444608,io.kubernetes.pod.n
amespace: kube-system,io.kubernetes.pod.uid: 184376adb1bc21476d5afd6fddaa3eea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.85:2379,kubernetes.io/config.hash: 184376adb1bc21476d5afd6fddaa3eea,kubernetes.io/config.seen: 2024-01-30T21:01:26.900883939Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=b9c7b02a-0f62-401e-beef-6d3538a763bd name=/runtime.v1.RuntimeService/ListPodSandbox
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.972124914Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=49e40154-3240-4d2d-be19-ff4ba02fd975 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.972204504Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=49e40154-3240-4d2d-be19-ff4ba02fd975 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.972617206Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04d4f719c4999c487f3329529b7a5a68628de670b4b149e53add5b419830669f,PodSandboxId:34c21f74a0112ccf6ff0028cf4a7ec8a666b9f800afb8ef7136a2af3474f1b00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706648842932718914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-q5cdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a268081-26a4-493c-8ef5-949abe27802a,},Annotations:map[string]string{io.kubernetes.container.hash: f12696b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f01a2b4c842e2b4a87dd2f8eaf3c61e42b3ee2320c540dfdd1ffcd69929d62,PodSandboxId:79da5bfcd78c34a8e5ad435c296d62995524ab3d602a7d09721a4b379e281717,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706648710458067972,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-2hbhw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f59807c3-31fe-4692-8bb3-ff395c694341,},An
notations:map[string]string{io.kubernetes.container.hash: f4fb525c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484aef6de2ec6d72c368c8f505efd6c10375fb85afaeec32f03077af57c1b243,PodSandboxId:c770e4cb3cfc542e3f243e0aeb2b876c7dc3427a3c4d01e6fc8b14157a08b06d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706648703203392130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 26f76318-e1c6-4db9-8edd-412294dd7aa8,},Annotations:map[string]string{io.kubernetes.container.hash: a1deaff6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf5e6c2d6dc773d15b4b8cc85dae2ee158158fb615322f47535a4af2e327f5e,PodSandboxId:b59ddecf5c5edea1ca7ada2a372cd272d9812b184b2aeb163a73f962e54db4e5,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648595947837337,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-f2x7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d1b2f03-a2a3-4e87-96c9-b5f7823baccb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bad8264,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a3e19bd9fcdf4c6de4781e80b4026ff6bc1ef3691f180665f396e910a0f59c,PodSandboxId:78538a5da7bde9f7933660d48f36e723eacf0ce4714278775049784e9fdeb74e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706648591989013947,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ldw8p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b80d5952-63e5-433f-a5e6-83b20da6158f,},Annotations:map[string]string{io.kubernetes.container.hash: 791a1087,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a082e476a3f4dfc5ebeb78c5a6a7b6840de20127bdc5edcf075c9912dab420f,PodSandboxId:776b9ae166d5037cc18508e876cae4d8cca88e78056f472111549e012b05af14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648583283609977,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p44xt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fd51cae-fa0b-4f07-a9cf-e7a2558466a7,},Annotations:map[string]string{io.kubernetes.container.hash: 95cb302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3664f23a39aa8238c3ccd1f8f813ff563e426560349073097647545feeb786,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706648565522654452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f6e333eb7de13d95474e41bbec4f7102a35c42b72112d063d3556070db0df9,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706648532292882152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deef12fad3fc95079c65f6e36cfd8c9d492356d13d8fa8d2c866997b76f0b9f6,PodSandboxId:cf9662f4d98c6a7d9bd77fe5e8fe43a550f2a27db748f38846caae57b263db9a,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706648529952846828,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-zb455,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2c78c5d5-0c99-43ad-9734-3555e64782bf,},Annotations:map[string]string{io.kubernetes.container.hash: f5e01f37,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a894455cabbcb1df9b0a9a45a13e46fee4795d0105a2c286947b6b405e97be,PodSandboxId:5b3c8edb2e18c05f435f98c194b71b1e655daaa6b5a0563a65ccbb0c61c97952,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.
k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706648526892415187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwwh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e3cb19-0b4e-4f26-bcb7-42fe4679950d,},Annotations:map[string]string{io.kubernetes.container.hash: d63cc2c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd556e5f803ce19a2dc76ea52d2af2c37c1a7ecc8c64281245af603e093be37,PodSandboxId:0eb1d83e8ae8297ae8ac9bf14401a32d9ca040704cdb0f1512c56efb0bc0a8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706648512272340112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tmkmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1ca7975-cdc8-49df-a84d-a508a524e812,},Annotations:map[string]string{io.kubernetes.container.hash: 149fc8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f231588d9af74fc3c19f1b0ecaf9366f422aba2c5594e37230172022b2229505,PodSandboxId:37df5b2890038ccb43d65f0c5ad35b7ff46c704b26750acd6ed8bddc8a0a8e55,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706648488687187631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f4fbe4a95d3603c2e96988dd606da2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1b81cdc3c5090362c77ffb9651f4f232080cc02b9b8fea8a5a65e4d554af6,PodSandboxId:14507a0545166eadbdfae74a5b8df18f602814ca5396860d5ed58bdbd50a8aeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706648488258003583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 184376adb1bc21476d5afd6fddaa3eea,},Annotations:map[string]string{io.kubernetes.container.hash: 39ee09ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ad245698646202d5fe5fc72f2b1a4284b6c078d51cf2742742ef086777a05f,PodSandboxId:79ac25a42da1c54dcee4007c0dfa103c278f2f570957027dfc0d58b00df16ca4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab96
9ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706648488102208505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea6a14b86fe02cadd8e13231b8e6134,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9abc940f285c816ff2a44571995f1704be9599053f587f32c4f0c512675369,PodSandboxId:c71145f4387a057f2e09148ffbaf35a5fd0759ab192032418f525aefdebd0c5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706648487893267355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b5b355a60008d74272a0482ef3baa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8466dab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=49e40154-3240-4d2d-be19-ff4ba02fd975 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.988177468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=77665324-7481-4a6d-abde-5dc3a1c562c5 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.988287202Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=77665324-7481-4a6d-abde-5dc3a1c562c5 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.989405213Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=bf20e46e-b01c-492c-b387-9754b796d6ed name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.990778224Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706648850990762755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:575989,},InodesUsed:&UInt64Value{Value:233,},},},}" file="go-grpc-middleware/chain.go:25" id=bf20e46e-b01c-492c-b387-9754b796d6ed name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.991394236Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3b0e655d-b77e-4dcb-9d41-9a5036f9ed4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.991520101Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3b0e655d-b77e-4dcb-9d41-9a5036f9ed4a name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:07:30 addons-444608 crio[713]: time="2024-01-30 21:07:30.991830399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:04d4f719c4999c487f3329529b7a5a68628de670b4b149e53add5b419830669f,PodSandboxId:34c21f74a0112ccf6ff0028cf4a7ec8a666b9f800afb8ef7136a2af3474f1b00,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706648842932718914,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-q5cdt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2a268081-26a4-493c-8ef5-949abe27802a,},Annotations:map[string]string{io.kubernetes.container.hash: f12696b9,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88f01a2b4c842e2b4a87dd2f8eaf3c61e42b3ee2320c540dfdd1ffcd69929d62,PodSandboxId:79da5bfcd78c34a8e5ad435c296d62995524ab3d602a7d09721a4b379e281717,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67,State:CONTAINER_RUNNING,CreatedAt:1706648710458067972,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-7ddfbb94ff-2hbhw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: f59807c3-31fe-4692-8bb3-ff395c694341,},An
notations:map[string]string{io.kubernetes.container.hash: f4fb525c,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:484aef6de2ec6d72c368c8f505efd6c10375fb85afaeec32f03077af57c1b243,PodSandboxId:c770e4cb3cfc542e3f243e0aeb2b876c7dc3427a3c4d01e6fc8b14157a08b06d,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706648703203392130,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 26f76318-e1c6-4db9-8edd-412294dd7aa8,},Annotations:map[string]string{io.kubernetes.container.hash: a1deaff6,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbf5e6c2d6dc773d15b4b8cc85dae2ee158158fb615322f47535a4af2e327f5e,PodSandboxId:b59ddecf5c5edea1ca7ada2a372cd272d9812b184b2aeb163a73f962e54db4e5,Metadata:&ContainerMetadata{Name:patch,Attempt:3,},Image:&ImageSpec{Image:1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648595947837337,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: in
gress-nginx-admission-patch-f2x7w,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 4d1b2f03-a2a3-4e87-96c9-b5f7823baccb,},Annotations:map[string]string{io.kubernetes.container.hash: 5bad8264,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:53a3e19bd9fcdf4c6de4781e80b4026ff6bc1ef3691f180665f396e910a0f59c,PodSandboxId:78538a5da7bde9f7933660d48f36e723eacf0ce4714278775049784e9fdeb74e,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1706648591989013947,Labels:map[string]string{io.kubernetes.container.name:
gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-ldw8p,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: b80d5952-63e5-433f-a5e6-83b20da6158f,},Annotations:map[string]string{io.kubernetes.container.hash: 791a1087,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9a082e476a3f4dfc5ebeb78c5a6a7b6840de20127bdc5edcf075c9912dab420f,PodSandboxId:776b9ae166d5037cc18508e876cae4d8cca88e78056f472111549e012b05af14,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c59
65b9bd31a01ee385,State:CONTAINER_EXITED,CreatedAt:1706648583283609977,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-p44xt,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0fd51cae-fa0b-4f07-a9cf-e7a2558466a7,},Annotations:map[string]string{io.kubernetes.container.hash: 95cb302,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd3664f23a39aa8238c3ccd1f8f813ff563e426560349073097647545feeb786,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441
c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706648565522654452,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:06f6e333eb7de13d95474e41bbec4f7102a35c42b72112d063d3556070db0df9,PodSandboxId:4ca9bc3949acd81422664e3429b379ff23848f07f94a60a61e70a38668a9bbfc,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c
1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706648532292882152,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcc04191-1d40-420e-a32d-f39ad0c10192,},Annotations:map[string]string{io.kubernetes.container.hash: 2027264e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:deef12fad3fc95079c65f6e36cfd8c9d492356d13d8fa8d2c866997b76f0b9f6,PodSandboxId:cf9662f4d98c6a7d9bd77fe5e8fe43a550f2a27db748f38846caae57b263db9a,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},},ImageRef:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727
bccc0e39f9329310,State:CONTAINER_RUNNING,CreatedAt:1706648529952846828,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-zb455,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 2c78c5d5-0c99-43ad-9734-3555e64782bf,},Annotations:map[string]string{io.kubernetes.container.hash: f5e01f37,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15a894455cabbcb1df9b0a9a45a13e46fee4795d0105a2c286947b6b405e97be,PodSandboxId:5b3c8edb2e18c05f435f98c194b71b1e655daaa6b5a0563a65ccbb0c61c97952,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.
k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706648526892415187,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gwwh9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a1e3cb19-0b4e-4f26-bcb7-42fe4679950d,},Annotations:map[string]string{io.kubernetes.container.hash: d63cc2c2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5fd556e5f803ce19a2dc76ea52d2af2c37c1a7ecc8c64281245af603e093be37,PodSandboxId:0eb1d83e8ae8297ae8ac9bf14401a32d9ca040704cdb0f1512c56efb0bc0a8a1,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead066
51cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706648512272340112,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tmkmx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1ca7975-cdc8-49df-a84d-a508a524e812,},Annotations:map[string]string{io.kubernetes.container.hash: 149fc8b9,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f231588d9af74fc3c19f1b0ecaf9366f422aba2c5594e37230172022b2229505,PodSandboxId:37df5b2890038ccb43d65f0c5ad35b7ff46c704b26750acd6ed8bddc8a0a8e55,Metadata:&ContainerMetadata{Name:
kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706648488687187631,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0f4fbe4a95d3603c2e96988dd606da2,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ce1b81cdc3c5090362c77ffb9651f4f232080cc02b9b8fea8a5a65e4d554af6,PodSandboxId:14507a0545166eadbdfae74a5b8df18f602814ca5396860d5ed58bdbd50a8aeb,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,
},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706648488258003583,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 184376adb1bc21476d5afd6fddaa3eea,},Annotations:map[string]string{io.kubernetes.container.hash: 39ee09ba,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14ad245698646202d5fe5fc72f2b1a4284b6c078d51cf2742742ef086777a05f,PodSandboxId:79ac25a42da1c54dcee4007c0dfa103c278f2f570957027dfc0d58b00df16ca4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab96
9ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706648488102208505,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7ea6a14b86fe02cadd8e13231b8e6134,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cf9abc940f285c816ff2a44571995f1704be9599053f587f32c4f0c512675369,PodSandboxId:c71145f4387a057f2e09148ffbaf35a5fd0759ab192032418f525aefdebd0c5e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image
:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706648487893267355,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-444608,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b1b5b355a60008d74272a0482ef3baa4,},Annotations:map[string]string{io.kubernetes.container.hash: 8466dab6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3b0e655d-b77e-4dcb-9d41-9a5036f9ed4a name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	04d4f719c4999       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   34c21f74a0112       hello-world-app-5d77478584-q5cdt
	88f01a2b4c842       ghcr.io/headlamp-k8s/headlamp@sha256:3c6da859a989f285b2fd2ac2f4763d1884d54a51e4405301e5324e0b2b70bd67                        2 minutes ago       Running             headlamp                  0                   79da5bfcd78c3       headlamp-7ddfbb94ff-2hbhw
	484aef6de2ec6       docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25                              2 minutes ago       Running             nginx                     0                   c770e4cb3cfc5       nginx
	cbf5e6c2d6dc7       1ebff0f9671bc015dc340b12c5bf6f3dbda7d0a8b5332bd095f21bd52e1b30fb                                                             4 minutes ago       Exited              patch                     3                   b59ddecf5c5ed       ingress-nginx-admission-patch-f2x7w
	53a3e19bd9fcd       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                 4 minutes ago       Running             gcp-auth                  0                   78538a5da7bde       gcp-auth-d4c87556c-ldw8p
	9a082e476a3f4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:29318c6957228dc10feb67fed5b91bdd8a9e3279e5b29c5965b9bd31a01ee385   4 minutes ago       Exited              create                    0                   776b9ae166d50       ingress-nginx-admission-create-p44xt
	fd3664f23a39a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       1                   4ca9bc3949acd       storage-provisioner
	06f6e333eb7de       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             5 minutes ago       Exited              storage-provisioner       0                   4ca9bc3949acd       storage-provisioner
	deef12fad3fc9       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              5 minutes ago       Running             yakd                      0                   cf9662f4d98c6       yakd-dashboard-9947fc6bf-zb455
	15a894455cabb       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             5 minutes ago       Running             kube-proxy                0                   5b3c8edb2e18c       kube-proxy-gwwh9
	5fd556e5f803c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             5 minutes ago       Running             coredns                   0                   0eb1d83e8ae82       coredns-5dd5756b68-tmkmx
	f231588d9af74       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             6 minutes ago       Running             kube-scheduler            0                   37df5b2890038       kube-scheduler-addons-444608
	1ce1b81cdc3c5       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             6 minutes ago       Running             etcd                      0                   14507a0545166       etcd-addons-444608
	14ad245698646       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             6 minutes ago       Running             kube-controller-manager   0                   79ac25a42da1c       kube-controller-manager-addons-444608
	cf9abc940f285       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             6 minutes ago       Running             kube-apiserver            0                   c71145f4387a0       kube-apiserver-addons-444608
	
	
	==> coredns [5fd556e5f803ce19a2dc76ea52d2af2c37c1a7ecc8c64281245af603e093be37] <==
	[INFO] 10.244.0.9:38071 - 53914 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108986s
	[INFO] 10.244.0.9:46305 - 26179 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000174648s
	[INFO] 10.244.0.9:46305 - 23616 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000232709s
	[INFO] 10.244.0.9:54095 - 14821 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00011496s
	[INFO] 10.244.0.9:54095 - 42727 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000146357s
	[INFO] 10.244.0.9:52466 - 33647 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000058943s
	[INFO] 10.244.0.9:52466 - 22385 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000103027s
	[INFO] 10.244.0.9:40280 - 43486 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000179367s
	[INFO] 10.244.0.9:40280 - 19675 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000091477s
	[INFO] 10.244.0.9:44325 - 10040 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000050088s
	[INFO] 10.244.0.9:44325 - 44853 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000111599s
	[INFO] 10.244.0.9:54331 - 16148 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005873s
	[INFO] 10.244.0.9:54331 - 21015 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000080146s
	[INFO] 10.244.0.9:54343 - 61338 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000082856s
	[INFO] 10.244.0.9:54343 - 63132 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000074235s
	[INFO] 10.244.0.21:57333 - 40221 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000309956s
	[INFO] 10.244.0.21:35311 - 41334 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000105846s
	[INFO] 10.244.0.21:50556 - 11182 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000132402s
	[INFO] 10.244.0.21:40053 - 46274 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000083178s
	[INFO] 10.244.0.21:48013 - 27735 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00009241s
	[INFO] 10.244.0.21:38011 - 21142 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.00011642s
	[INFO] 10.244.0.21:41225 - 33163 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.001046929s
	[INFO] 10.244.0.21:59053 - 8550 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001326874s
	[INFO] 10.244.0.26:58159 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000713645s
	[INFO] 10.244.0.26:45961 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000221645s
	
	
	==> describe nodes <==
	Name:               addons-444608
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-444608
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=addons-444608
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T21_01_36_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-444608
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 21:01:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-444608
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 21:07:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 21:05:42 +0000   Tue, 30 Jan 2024 21:01:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 21:05:42 +0000   Tue, 30 Jan 2024 21:01:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 21:05:42 +0000   Tue, 30 Jan 2024 21:01:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 21:05:42 +0000   Tue, 30 Jan 2024 21:01:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.85
	  Hostname:    addons-444608
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914496Ki
	  pods:               110
	System Info:
	  Machine ID:                 cc6cc1f4359e47f086fcf86cc016c441
	  System UUID:                cc6cc1f4-359e-47f0-86fc-f86cc016c441
	  Boot ID:                    999d228e-0017-47e5-b64f-fa76a2dc6051
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-q5cdt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-d4c87556c-ldw8p                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m30s
	  headlamp                    headlamp-7ddfbb94ff-2hbhw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 coredns-5dd5756b68-tmkmx                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     5m43s
	  kube-system                 etcd-addons-444608                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m56s
	  kube-system                 kube-apiserver-addons-444608             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m55s
	  kube-system                 kube-controller-manager-addons-444608    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m57s
	  kube-system                 kube-proxy-gwwh9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m43s
	  kube-system                 kube-scheduler-addons-444608             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m58s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m35s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-zb455           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     5m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 5m19s                kube-proxy       
	  Normal  Starting                 6m5s                 kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m5s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m4s (x8 over 6m5s)  kubelet          Node addons-444608 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m4s (x8 over 6m5s)  kubelet          Node addons-444608 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m4s (x7 over 6m5s)  kubelet          Node addons-444608 status is now: NodeHasSufficientPID
	  Normal  Starting                 5m56s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m56s                kubelet          Node addons-444608 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m56s                kubelet          Node addons-444608 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m56s                kubelet          Node addons-444608 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                5m55s                kubelet          Node addons-444608 status is now: NodeReady
	  Normal  RegisteredNode           5m44s                node-controller  Node addons-444608 event: Registered Node addons-444608 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.859602] systemd-fstab-generator[638]: Ignoring "noauto" for root device
	[  +0.105118] systemd-fstab-generator[649]: Ignoring "noauto" for root device
	[  +0.137469] systemd-fstab-generator[662]: Ignoring "noauto" for root device
	[  +0.095923] systemd-fstab-generator[673]: Ignoring "noauto" for root device
	[  +0.244247] systemd-fstab-generator[697]: Ignoring "noauto" for root device
	[ +10.549995] systemd-fstab-generator[907]: Ignoring "noauto" for root device
	[  +9.275609] systemd-fstab-generator[1245]: Ignoring "noauto" for root device
	[Jan30 21:02] kauditd_printk_skb: 69 callbacks suppressed
	[  +8.998385] kauditd_printk_skb: 4 callbacks suppressed
	[ +16.493779] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.969268] kauditd_printk_skb: 18 callbacks suppressed
	[Jan30 21:03] kauditd_printk_skb: 7 callbacks suppressed
	[  +6.061307] kauditd_printk_skb: 24 callbacks suppressed
	[ +45.122732] kauditd_printk_skb: 18 callbacks suppressed
	[Jan30 21:04] kauditd_printk_skb: 18 callbacks suppressed
	[ +20.432340] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.764032] kauditd_printk_skb: 11 callbacks suppressed
	[  +5.103929] kauditd_printk_skb: 45 callbacks suppressed
	[  +6.837622] kauditd_printk_skb: 10 callbacks suppressed
	[  +6.232473] kauditd_printk_skb: 10 callbacks suppressed
	[Jan30 21:05] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.909659] kauditd_printk_skb: 4 callbacks suppressed
	[Jan30 21:07] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.798296] kauditd_printk_skb: 7 callbacks suppressed
	
	
	==> etcd [1ce1b81cdc3c5090362c77ffb9651f4f232080cc02b9b8fea8a5a65e4d554af6] <==
	{"level":"info","ts":"2024-01-30T21:03:06.860423Z","caller":"traceutil/trace.go:171","msg":"trace[1330600706] linearizableReadLoop","detail":"{readStateIndex:1154; appliedIndex:1154; }","duration":"252.803199ms","start":"2024-01-30T21:03:06.607611Z","end":"2024-01-30T21:03:06.860415Z","steps":["trace[1330600706] 'read index received'  (duration: 252.799571ms)","trace[1330600706] 'applied index is now lower than readState.Index'  (duration: 2.91µs)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T21:03:06.865585Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"229.538834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:82441"}
	{"level":"info","ts":"2024-01-30T21:03:06.865686Z","caller":"traceutil/trace.go:171","msg":"trace[555023563] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1123; }","duration":"229.641326ms","start":"2024-01-30T21:03:06.636035Z","end":"2024-01-30T21:03:06.865676Z","steps":["trace[555023563] 'agreement among raft nodes before linearized reading'  (duration: 229.358998ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T21:03:06.865248Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"257.591974ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-30T21:03:06.866384Z","caller":"traceutil/trace.go:171","msg":"trace[461729510] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1123; }","duration":"258.729898ms","start":"2024-01-30T21:03:06.607588Z","end":"2024-01-30T21:03:06.866317Z","steps":["trace[461729510] 'agreement among raft nodes before linearized reading'  (duration: 257.55371ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T21:03:27.182376Z","caller":"traceutil/trace.go:171","msg":"trace[1898749249] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"199.743377ms","start":"2024-01-30T21:03:26.98261Z","end":"2024-01-30T21:03:27.182353Z","steps":["trace[1898749249] 'process raft request'  (duration: 199.324183ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T21:03:27.18296Z","caller":"traceutil/trace.go:171","msg":"trace[1803449213] linearizableReadLoop","detail":"{readStateIndex:1263; appliedIndex:1262; }","duration":"164.41009ms","start":"2024-01-30T21:03:27.01854Z","end":"2024-01-30T21:03:27.18295Z","steps":["trace[1803449213] 'read index received'  (duration: 163.29116ms)","trace[1803449213] 'applied index is now lower than readState.Index'  (duration: 1.11795ms)"],"step_count":2}
	{"level":"warn","ts":"2024-01-30T21:03:27.183145Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"164.613785ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2024-01-30T21:03:27.183218Z","caller":"traceutil/trace.go:171","msg":"trace[527437900] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1228; }","duration":"164.693694ms","start":"2024-01-30T21:03:27.018512Z","end":"2024-01-30T21:03:27.183206Z","steps":["trace[527437900] 'agreement among raft nodes before linearized reading'  (duration: 164.579995ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T21:03:27.183353Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"154.562719ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T21:03:27.189328Z","caller":"traceutil/trace.go:171","msg":"trace[729151596] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1228; }","duration":"160.532429ms","start":"2024-01-30T21:03:27.028782Z","end":"2024-01-30T21:03:27.189314Z","steps":["trace[729151596] 'agreement among raft nodes before linearized reading'  (duration: 154.547273ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T21:03:27.18398Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.692702ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13861"}
	{"level":"info","ts":"2024-01-30T21:03:27.18984Z","caller":"traceutil/trace.go:171","msg":"trace[2100166376] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1228; }","duration":"116.553958ms","start":"2024-01-30T21:03:27.073275Z","end":"2024-01-30T21:03:27.189829Z","steps":["trace[2100166376] 'agreement among raft nodes before linearized reading'  (duration: 110.632733ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T21:04:33.680065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"131.397939ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9443"}
	{"level":"warn","ts":"2024-01-30T21:04:33.680251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.924774ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:9443"}
	{"level":"info","ts":"2024-01-30T21:04:33.680321Z","caller":"traceutil/trace.go:171","msg":"trace[109258617] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1384; }","duration":"112.001221ms","start":"2024-01-30T21:04:33.568309Z","end":"2024-01-30T21:04:33.68031Z","steps":["trace[109258617] 'range keys from in-memory index tree'  (duration: 111.816109ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T21:04:33.680255Z","caller":"traceutil/trace.go:171","msg":"trace[2106240525] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1384; }","duration":"131.619378ms","start":"2024-01-30T21:04:33.548619Z","end":"2024-01-30T21:04:33.680238Z","steps":["trace[2106240525] 'range keys from in-memory index tree'  (duration: 131.226328ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T21:04:36.221924Z","caller":"traceutil/trace.go:171","msg":"trace[1175052145] linearizableReadLoop","detail":"{readStateIndex:1440; appliedIndex:1439; }","duration":"250.305905ms","start":"2024-01-30T21:04:35.971606Z","end":"2024-01-30T21:04:36.221912Z","steps":["trace[1175052145] 'read index received'  (duration: 250.175081ms)","trace[1175052145] 'applied index is now lower than readState.Index'  (duration: 130.368µs)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T21:04:36.222247Z","caller":"traceutil/trace.go:171","msg":"trace[1875408782] transaction","detail":"{read_only:false; response_revision:1390; number_of_response:1; }","duration":"255.381855ms","start":"2024-01-30T21:04:35.966847Z","end":"2024-01-30T21:04:36.222229Z","steps":["trace[1875408782] 'process raft request'  (duration: 254.978657ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T21:04:36.222505Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.031052ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T21:04:36.222587Z","caller":"traceutil/trace.go:171","msg":"trace[97672741] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1390; }","duration":"251.171477ms","start":"2024-01-30T21:04:35.971381Z","end":"2024-01-30T21:04:36.222553Z","steps":["trace[97672741] 'agreement among raft nodes before linearized reading'  (duration: 251.005359ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T21:04:36.222774Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.613624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T21:04:36.222818Z","caller":"traceutil/trace.go:171","msg":"trace[2129257327] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1390; }","duration":"198.661075ms","start":"2024-01-30T21:04:36.024151Z","end":"2024-01-30T21:04:36.222812Z","steps":["trace[2129257327] 'agreement among raft nodes before linearized reading'  (duration: 198.599714ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T21:05:03.06988Z","caller":"traceutil/trace.go:171","msg":"trace[1760422914] transaction","detail":"{read_only:false; response_revision:1686; number_of_response:1; }","duration":"189.497035ms","start":"2024-01-30T21:05:02.880352Z","end":"2024-01-30T21:05:03.069849Z","steps":["trace[1760422914] 'process raft request'  (duration: 188.72141ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T21:05:40.580432Z","caller":"traceutil/trace.go:171","msg":"trace[1031456170] transaction","detail":"{read_only:false; response_revision:1892; number_of_response:1; }","duration":"271.330758ms","start":"2024-01-30T21:05:40.309084Z","end":"2024-01-30T21:05:40.580415Z","steps":["trace[1031456170] 'process raft request'  (duration: 271.167965ms)"],"step_count":1}
	
	
	==> gcp-auth [53a3e19bd9fcdf4c6de4781e80b4026ff6bc1ef3691f180665f396e910a0f59c] <==
	2024/01/30 21:03:12 GCP Auth Webhook started!
	2024/01/30 21:04:25 Ready to marshal response ...
	2024/01/30 21:04:25 Ready to write response ...
	2024/01/30 21:04:25 Ready to marshal response ...
	2024/01/30 21:04:25 Ready to write response ...
	2024/01/30 21:04:29 Ready to marshal response ...
	2024/01/30 21:04:29 Ready to write response ...
	2024/01/30 21:04:36 Ready to marshal response ...
	2024/01/30 21:04:36 Ready to write response ...
	2024/01/30 21:04:42 Ready to marshal response ...
	2024/01/30 21:04:42 Ready to write response ...
	2024/01/30 21:04:43 Ready to marshal response ...
	2024/01/30 21:04:43 Ready to write response ...
	2024/01/30 21:04:53 Ready to marshal response ...
	2024/01/30 21:04:53 Ready to write response ...
	2024/01/30 21:04:58 Ready to marshal response ...
	2024/01/30 21:04:58 Ready to write response ...
	2024/01/30 21:05:03 Ready to marshal response ...
	2024/01/30 21:05:03 Ready to write response ...
	2024/01/30 21:05:03 Ready to marshal response ...
	2024/01/30 21:05:03 Ready to write response ...
	2024/01/30 21:05:03 Ready to marshal response ...
	2024/01/30 21:05:03 Ready to write response ...
	2024/01/30 21:07:20 Ready to marshal response ...
	2024/01/30 21:07:20 Ready to write response ...
	
	
	==> kernel <==
	 21:07:31 up 6 min,  0 users,  load average: 0.96, 2.40, 1.41
	Linux addons-444608 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [cf9abc940f285c816ff2a44571995f1704be9599053f587f32c4f0c512675369] <==
	I0130 21:04:58.264772       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.96.16.172"}
	E0130 21:04:59.106620       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0130 21:05:03.176936       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.43.217"}
	I0130 21:05:11.693719       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.693891       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.705253       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.705356       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.718210       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.718355       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.782803       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.782887       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.789306       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.789387       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.818706       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.818842       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.869833       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.869937       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0130 21:05:11.877276       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0130 21:05:11.877359       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0130 21:05:12.789381       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0130 21:05:12.878072       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0130 21:05:12.898404       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0130 21:07:20.311325       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.235.124"}
	E0130 21:07:23.120556       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	E0130 21:07:26.054278       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [14ad245698646202d5fe5fc72f2b1a4284b6c078d51cf2742742ef086777a05f] <==
	E0130 21:05:58.098735       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 21:06:28.491030       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:06:28.491519       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 21:06:30.623270       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:06:30.623374       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 21:06:37.699769       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:06:37.699836       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 21:06:41.652899       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:06:41.653037       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 21:07:05.365684       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:07:05.365734       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0130 21:07:20.048177       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0130 21:07:20.106740       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-q5cdt"
	I0130 21:07:20.118101       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="69.980587ms"
	I0130 21:07:20.159162       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.771971ms"
	I0130 21:07:20.160172       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.978µs"
	I0130 21:07:22.995533       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0130 21:07:23.008154       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0130 21:07:23.012123       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-69cff4fd79" duration="4.079µs"
	I0130 21:07:23.968518       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.008383ms"
	I0130 21:07:23.968749       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="99.66µs"
	W0130 21:07:26.174889       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:07:26.174993       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0130 21:07:28.870287       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0130 21:07:28.870376       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [15a894455cabbcb1df9b0a9a45a13e46fee4795d0105a2c286947b6b405e97be] <==
	I0130 21:02:10.178004       1 server_others.go:69] "Using iptables proxy"
	I0130 21:02:10.281144       1 node.go:141] Successfully retrieved node IP: 192.168.39.85
	I0130 21:02:11.311617       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 21:02:11.311759       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 21:02:11.515958       1 server_others.go:152] "Using iptables Proxier"
	I0130 21:02:11.516030       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 21:02:11.516191       1 server.go:846] "Version info" version="v1.28.4"
	I0130 21:02:11.516230       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 21:02:11.627332       1 config.go:188] "Starting service config controller"
	I0130 21:02:11.627397       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 21:02:11.627429       1 config.go:97] "Starting endpoint slice config controller"
	I0130 21:02:11.627433       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 21:02:11.680543       1 config.go:315] "Starting node config controller"
	I0130 21:02:11.680650       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 21:02:11.973038       1 shared_informer.go:318] Caches are synced for node config
	I0130 21:02:11.973185       1 shared_informer.go:318] Caches are synced for service config
	I0130 21:02:12.129577       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [f231588d9af74fc3c19f1b0ecaf9366f422aba2c5594e37230172022b2229505] <==
	W0130 21:01:32.419084       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 21:01:32.419231       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 21:01:32.419548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 21:01:32.419118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 21:01:32.419793       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 21:01:32.419824       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0130 21:01:33.342943       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 21:01:33.342996       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0130 21:01:33.361661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 21:01:33.361734       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 21:01:33.443772       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 21:01:33.443873       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 21:01:33.477221       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 21:01:33.477364       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 21:01:33.482713       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 21:01:33.482736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 21:01:33.519356       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 21:01:33.519495       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0130 21:01:33.520052       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 21:01:33.520119       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0130 21:01:33.554244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 21:01:33.554294       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 21:01:33.867739       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 21:01:33.867830       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0130 21:01:36.684034       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 21:01:02 UTC, ends at Tue 2024-01-30 21:07:31 UTC. --
	Jan 30 21:07:20 addons-444608 kubelet[1252]: I0130 21:07:20.123724    1252 memory_manager.go:346] "RemoveStaleState removing state" podUID="25d15045-eb02-482b-8fef-4aa31e90d93d" containerName="csi-provisioner"
	Jan 30 21:07:20 addons-444608 kubelet[1252]: I0130 21:07:20.244748    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgtd6\" (UniqueName: \"kubernetes.io/projected/2a268081-26a4-493c-8ef5-949abe27802a-kube-api-access-kgtd6\") pod \"hello-world-app-5d77478584-q5cdt\" (UID: \"2a268081-26a4-493c-8ef5-949abe27802a\") " pod="default/hello-world-app-5d77478584-q5cdt"
	Jan 30 21:07:20 addons-444608 kubelet[1252]: I0130 21:07:20.244788    1252 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2a268081-26a4-493c-8ef5-949abe27802a-gcp-creds\") pod \"hello-world-app-5d77478584-q5cdt\" (UID: \"2a268081-26a4-493c-8ef5-949abe27802a\") " pod="default/hello-world-app-5d77478584-q5cdt"
	Jan 30 21:07:21 addons-444608 kubelet[1252]: I0130 21:07:21.654757    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlt9d\" (UniqueName: \"kubernetes.io/projected/1569e976-2932-44de-849e-d7c8d97d191f-kube-api-access-nlt9d\") pod \"1569e976-2932-44de-849e-d7c8d97d191f\" (UID: \"1569e976-2932-44de-849e-d7c8d97d191f\") "
	Jan 30 21:07:21 addons-444608 kubelet[1252]: I0130 21:07:21.657329    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1569e976-2932-44de-849e-d7c8d97d191f-kube-api-access-nlt9d" (OuterVolumeSpecName: "kube-api-access-nlt9d") pod "1569e976-2932-44de-849e-d7c8d97d191f" (UID: "1569e976-2932-44de-849e-d7c8d97d191f"). InnerVolumeSpecName "kube-api-access-nlt9d". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 30 21:07:21 addons-444608 kubelet[1252]: I0130 21:07:21.755883    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nlt9d\" (UniqueName: \"kubernetes.io/projected/1569e976-2932-44de-849e-d7c8d97d191f-kube-api-access-nlt9d\") on node \"addons-444608\" DevicePath \"\""
	Jan 30 21:07:21 addons-444608 kubelet[1252]: I0130 21:07:21.922615    1252 scope.go:117] "RemoveContainer" containerID="d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb"
	Jan 30 21:07:22 addons-444608 kubelet[1252]: I0130 21:07:22.364930    1252 scope.go:117] "RemoveContainer" containerID="d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb"
	Jan 30 21:07:22 addons-444608 kubelet[1252]: E0130 21:07:22.365997    1252 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb\": container with ID starting with d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb not found: ID does not exist" containerID="d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb"
	Jan 30 21:07:22 addons-444608 kubelet[1252]: I0130 21:07:22.366073    1252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb"} err="failed to get container status \"d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb\": rpc error: code = NotFound desc = could not find container \"d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb\": container with ID starting with d14c268a9e1956fa2bb11ff5dee6e2bbbffb92f956fce5225a4b593d411e11eb not found: ID does not exist"
	Jan 30 21:07:23 addons-444608 kubelet[1252]: I0130 21:07:23.906019    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="0fd51cae-fa0b-4f07-a9cf-e7a2558466a7" path="/var/lib/kubelet/pods/0fd51cae-fa0b-4f07-a9cf-e7a2558466a7/volumes"
	Jan 30 21:07:23 addons-444608 kubelet[1252]: I0130 21:07:23.906598    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="1569e976-2932-44de-849e-d7c8d97d191f" path="/var/lib/kubelet/pods/1569e976-2932-44de-849e-d7c8d97d191f/volumes"
	Jan 30 21:07:23 addons-444608 kubelet[1252]: I0130 21:07:23.907009    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="4d1b2f03-a2a3-4e87-96c9-b5f7823baccb" path="/var/lib/kubelet/pods/4d1b2f03-a2a3-4e87-96c9-b5f7823baccb/volumes"
	Jan 30 21:07:23 addons-444608 kubelet[1252]: I0130 21:07:23.952603    1252 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-q5cdt" podStartSLOduration=2.467063885 podCreationTimestamp="2024-01-30 21:07:20 +0000 UTC" firstStartedPulling="2024-01-30 21:07:21.418097681 +0000 UTC m=+345.682174531" lastFinishedPulling="2024-01-30 21:07:22.903404251 +0000 UTC m=+347.167481100" observedRunningTime="2024-01-30 21:07:23.951536035 +0000 UTC m=+348.215612904" watchObservedRunningTime="2024-01-30 21:07:23.952370454 +0000 UTC m=+348.216447324"
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.391580    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31-webhook-cert\") pod \"eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31\" (UID: \"eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31\") "
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.391634    1252 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-trtj4\" (UniqueName: \"kubernetes.io/projected/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31-kube-api-access-trtj4\") pod \"eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31\" (UID: \"eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31\") "
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.394564    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31" (UID: "eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.397262    1252 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31-kube-api-access-trtj4" (OuterVolumeSpecName: "kube-api-access-trtj4") pod "eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31" (UID: "eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31"). InnerVolumeSpecName "kube-api-access-trtj4". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.492943    1252 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-trtj4\" (UniqueName: \"kubernetes.io/projected/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31-kube-api-access-trtj4\") on node \"addons-444608\" DevicePath \"\""
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.492979    1252 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31-webhook-cert\") on node \"addons-444608\" DevicePath \"\""
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.954999    1252 scope.go:117] "RemoveContainer" containerID="51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193"
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.991246    1252 scope.go:117] "RemoveContainer" containerID="51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193"
	Jan 30 21:07:26 addons-444608 kubelet[1252]: E0130 21:07:26.992080    1252 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193\": container with ID starting with 51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193 not found: ID does not exist" containerID="51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193"
	Jan 30 21:07:26 addons-444608 kubelet[1252]: I0130 21:07:26.992130    1252 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193"} err="failed to get container status \"51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193\": rpc error: code = NotFound desc = could not find container \"51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193\": container with ID starting with 51c02e8fe74b9e9094815848b97656d4d248c1122fabab364fcf479aaa351193 not found: ID does not exist"
	Jan 30 21:07:27 addons-444608 kubelet[1252]: I0130 21:07:27.905351    1252 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31" path="/var/lib/kubelet/pods/eb5bdcc4-15f2-4e17-ba04-64dc5e6fbf31/volumes"
	
	
	==> storage-provisioner [06f6e333eb7de13d95474e41bbec4f7102a35c42b72112d063d3556070db0df9] <==
	I0130 21:02:12.994129       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 21:02:42.999331       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [fd3664f23a39aa8238c3ccd1f8f813ff563e426560349073097647545feeb786] <==
	I0130 21:02:45.759400       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 21:02:45.781239       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 21:02:45.781540       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 21:02:45.797846       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 21:02:45.800182       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-444608_a04045e7-8e93-47e9-a85c-d8247a3f1e30!
	I0130 21:02:45.800043       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"ef3dc5ef-59bf-40de-a1e6-a3aa9b7a1b1b", APIVersion:"v1", ResourceVersion:"1020", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-444608_a04045e7-8e93-47e9-a85c-d8247a3f1e30 became leader
	I0130 21:02:45.903914       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-444608_a04045e7-8e93-47e9-a85c-d8247a3f1e30!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-444608 -n addons-444608
helpers_test.go:261: (dbg) Run:  kubectl --context addons-444608 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.52s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.2s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-444608
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-444608: exit status 82 (2m0.283818499s)

                                                
                                                
-- stdout --
	* Stopping node "addons-444608"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-444608" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444608
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-444608: exit status 11 (21.631382904s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-444608" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444608
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-444608: exit status 11 (6.143356742s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-444608" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-444608
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-444608: exit status 11 (6.144152053s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.85:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-444608" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.20s)
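The stop timed out with the VM still reporting "Running", and every follow-up addons command then failed with "no route to host" on the node's SSH port. A minimal manual triage sketch, assuming the kvm2 driver and the addons-444608 profile (the libvirt domain name normally matches the profile name; the virsh step is a last-resort force power-off, not part of the test flow):

	# Re-run the stop with verbose logging to see where the guest shutdown stalls
	out/minikube-linux-amd64 stop -p addons-444608 --alsologtostderr -v=3

	# Collect the log bundle the failure message asks for
	out/minikube-linux-amd64 logs --file=logs.txt -p addons-444608

	# If the guest never powers off, force the libvirt domain down (assumes domain name == profile name)
	virsh -c qemu:///system destroy addons-444608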

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (0.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-500919 /tmp/TestFunctionalserialCacheCmdcacheadd_local142848583/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cache add minikube-local-cache-test:functional-500919
functional_test.go:1085: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 cache add minikube-local-cache-test:functional-500919: exit status 10 (426.072713ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! "minikube cache" will be deprecated in upcoming versions, please switch to "minikube image load"
	X Exiting due to MK_CACHE_LOAD: Failed to cache and load images: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/minikube-local-cache-test_functional-500919": write: unable to calculate manifest: blob sha256:bd99dd27a9d94cd754051057c185a28fd8a0217ecd02ff3ee64553c4ba3a94ae not found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_0d071fa2cb673630fd44fda7009ed75495776861_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1087: failed to 'cache add' local image "minikube-local-cache-test:functional-500919". args "out/minikube-linux-amd64 -p functional-500919 cache add minikube-local-cache-test:functional-500919" err exit status 10
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cache delete minikube-local-cache-test:functional-500919
functional_test.go:1090: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 cache delete minikube-local-cache-test:functional-500919: exit status 30 (78.72732ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to HOST_DEL_CACHE: Failed to delete images: remove /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/minikube-local-cache-test_functional-500919: no such file or directory
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_cache_2de9a62286edda32cd9c7e6f01df908b42a29ee3_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1092: failed to 'cache delete' local image "minikube-local-cache-test:functional-500919". args "out/minikube-linux-amd64 -p functional-500919 cache delete minikube-local-cache-test:functional-500919" err exit status 30
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-500919
--- FAIL: TestFunctional/serial/CacheCmd/cache/add_local (0.82s)
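The stderr above already warns that "minikube cache" is being deprecated in favour of "minikube image load". A minimal sketch of the equivalent flow with the newer command, assuming the same locally built tag and build context as the test (whether it avoids the missing-blob error seen here is untested):

	# Build the throwaway image the test uses, then load it with the non-deprecated command
	docker build -t minikube-local-cache-test:functional-500919 /tmp/TestFunctionalserialCacheCmdcacheadd_local142848583/001
	out/minikube-linux-amd64 -p functional-500919 image load minikube-local-cache-test:functional-500919

	# Confirm the image is visible inside the cluster's container runtime
	out/minikube-linux-amd64 -p functional-500919 image ls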

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr
functional_test.go:354: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr: exit status 80 (919.789677ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:14:33.121491  654168 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:33.121769  654168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:33.121782  654168 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:33.121789  654168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:33.122090  654168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:33.122794  654168 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:33.122885  654168 cache.go:107] acquiring lock: {Name:mkb0a0c566d562a6913b6523f352efb457888e5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:14:33.123124  654168 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-500919
	I0130 21:14:33.125461  654168 image.go:173] found gcr.io/google-containers/addon-resizer:functional-500919 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-500919 original:gcr.io/google-containers/addon-resizer:functional-500919} opener:0xc00016e070 tarballImage:<nil> computed:false id:0xc00093e0e0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 21:14:33.125554  654168 cache.go:162] opening:  /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919
	I0130 21:14:33.951415  654168 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-500919" -> "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919" took 828.547644ms
	I0130 21:14:33.953574  654168 out.go:177] 
	W0130 21:14:33.955461  654168 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 21:14:33.955493  654168 out.go:239] * 
	* 
	W0130 21:14:33.958792  654168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 21:14:33.960246  654168 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:356: loading image into minikube from daemon: exit status 80

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:14:33.121491  654168 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:33.121769  654168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:33.121782  654168 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:33.121789  654168 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:33.122090  654168 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:33.122794  654168 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:33.122885  654168 cache.go:107] acquiring lock: {Name:mkb0a0c566d562a6913b6523f352efb457888e5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:14:33.123124  654168 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-500919
	I0130 21:14:33.125461  654168 image.go:173] found gcr.io/google-containers/addon-resizer:functional-500919 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-500919 original:gcr.io/google-containers/addon-resizer:functional-500919} opener:0xc00016e070 tarballImage:<nil> computed:false id:0xc00093e0e0 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 21:14:33.125554  654168 cache.go:162] opening:  /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919
	I0130 21:14:33.951415  654168 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-500919" -> "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919" took 828.547644ms
	I0130 21:14:33.953574  654168 out.go:177] 
	W0130 21:14:33.955461  654168 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 21:14:33.955493  654168 out.go:239] * 
	* 
	W0130 21:14:33.958792  654168 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 21:14:33.960246  654168 out.go:177] 

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.92s)
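The load fails while minikube is re-reading the image from the local Docker daemon ("unable to calculate manifest: blob ... not found"). One hedged way to check whether the daemon copy itself is intact is to export it to a tarball and load the tarball instead, which sidesteps the per-blob daemon lookup; a diagnostic sketch, not a confirmed fix:

	# Export the tagged image from the Docker daemon into a self-contained tarball
	docker save gcr.io/google-containers/addon-resizer:functional-500919 -o /tmp/addon-resizer.tar

	# Load the tarball directly; minikube image load accepts archive paths as well as image names
	out/minikube-linux-amd64 -p functional-500919 image load /tmp/addon-resizer.tar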

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr
functional_test.go:364: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr: exit status 80 (782.288827ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:14:34.041670  654363 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:34.042013  654363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:34.042028  654363 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:34.042036  654363 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:34.042349  654363 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:34.043142  654363 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:34.043220  654363 cache.go:107] acquiring lock: {Name:mkb0a0c566d562a6913b6523f352efb457888e5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:14:34.043354  654363 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-500919
	I0130 21:14:34.045293  654363 image.go:173] found gcr.io/google-containers/addon-resizer:functional-500919 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-500919 original:gcr.io/google-containers/addon-resizer:functional-500919} opener:0xc00016ecb0 tarballImage:<nil> computed:false id:0xc00083a040 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 21:14:34.045325  654363 cache.go:162] opening:  /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919
	I0130 21:14:34.731426  654363 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-500919" -> "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919" took 688.21922ms
	I0130 21:14:34.734086  654363 out.go:177] 
	W0130 21:14:34.735585  654363 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:df65ec24e31e9052f40143a6c297f81013842ab5813fa9c8d8da20a43938ad9e not found
	W0130 21:14:34.735609  654363 out.go:239] * 
	* 
	W0130 21:14:34.739926  654363 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 21:14:34.741732  654363 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:366: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.78s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-500919
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr
functional_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr: exit status 80 (738.494307ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:14:35.692717  654637 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:35.692887  654637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:35.692899  654637 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:35.692904  654637 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:35.693160  654637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:35.693881  654637 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:35.693947  654637 cache.go:107] acquiring lock: {Name:mkb0a0c566d562a6913b6523f352efb457888e5e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:14:35.694052  654637 image.go:134] retrieving image: gcr.io/google-containers/addon-resizer:functional-500919
	I0130 21:14:35.695745  654637 image.go:173] found gcr.io/google-containers/addon-resizer:functional-500919 locally: &{ref:{Repository:{Registry:{insecure:false registry:gcr.io} repository:google-containers/addon-resizer} tag:functional-500919 original:gcr.io/google-containers/addon-resizer:functional-500919} opener:0xc0007ca070 tarballImage:<nil> computed:false id:0xc0000ae080 configFile:<nil> once:{done:0 m:{state:0 sema:0}} err:<nil>}
	I0130 21:14:35.695771  654637 cache.go:162] opening:  /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919
	I0130 21:14:36.352201  654637 cache.go:96] cache image "gcr.io/google-containers/addon-resizer:functional-500919" -> "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919" took 658.256343ms
	I0130 21:14:36.354894  654637 out.go:177] 
	W0130 21:14:36.356599  654637 out.go:239] X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	X Exiting due to GUEST_IMAGE_LOAD: Failed to load image: save to dir: caching images: caching image "/home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919": write: unable to calculate manifest: blob sha256:f3896f083e92c804887811c3ec1e7c7e38dd72e96aec843c52a5af3fd81d0e6a not found
	W0130 21:14:36.356622  654637 out.go:239] * 
	* 
	W0130 21:14:36.359549  654637 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_ac2ace73ac40020c4171aa9c312290b59eecf530_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 21:14:36.361307  654637 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:246: loading image into minikube from daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.62s)
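The image load --daemon failures in this run all exit with the same GUEST_IMAGE_LOAD error ("unable to calculate manifest: blob sha256:... not found") while caching the image from the local Docker daemon. A minimal manual re-run of the same sequence, assuming the functional-500919 profile is still running and the out/ binary from this job is available:

    docker pull gcr.io/google-containers/addon-resizer:1.8.9
    docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-500919
    # Confirm the tagged image is actually present in the local Docker daemon before loading it.
    docker image inspect --format '{{.Id}}' gcr.io/google-containers/addon-resizer:functional-500919
    out/minikube-linux-amd64 -p functional-500919 image load --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr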

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image save gcr.io/google-containers/addon-resizer:functional-500919 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:385: expected "/home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.38s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:410: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I0130 21:14:37.667241  654718 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:37.667577  654718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:37.667589  654718 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:37.667594  654718 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:37.667818  654718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:37.668574  654718 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:37.668724  654718 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:37.669178  654718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:14:37.669236  654718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:14:37.684694  654718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43879
	I0130 21:14:37.685185  654718 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:14:37.685892  654718 main.go:141] libmachine: Using API Version  1
	I0130 21:14:37.685921  654718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:14:37.686271  654718 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:14:37.686482  654718 main.go:141] libmachine: (functional-500919) Calling .GetState
	I0130 21:14:37.688404  654718 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:14:37.688448  654718 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:14:37.703466  654718 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37663
	I0130 21:14:37.703912  654718 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:14:37.704412  654718 main.go:141] libmachine: Using API Version  1
	I0130 21:14:37.704474  654718 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:14:37.704893  654718 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:14:37.705097  654718 main.go:141] libmachine: (functional-500919) Calling .DriverName
	I0130 21:14:37.705405  654718 ssh_runner.go:195] Run: systemctl --version
	I0130 21:14:37.705438  654718 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
	I0130 21:14:37.708777  654718 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
	I0130 21:14:37.709216  654718 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
	I0130 21:14:37.709258  654718 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
	I0130 21:14:37.709392  654718 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
	I0130 21:14:37.709601  654718 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
	I0130 21:14:37.709800  654718 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
	I0130 21:14:37.710035  654718 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
	I0130 21:14:37.842124  654718 cache_images.go:286] Loading image from: /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
	W0130 21:14:37.842210  654718 cache_images.go:254] Failed to load cached images for profile functional-500919. make sure the profile is running. loading images: stat /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar: no such file or directory
	I0130 21:14:37.842251  654718 cache_images.go:262] succeeded pushing to: 
	I0130 21:14:37.842256  654718 cache_images.go:263] failed pushing to: functional-500919
	I0130 21:14:37.842280  654718 main.go:141] libmachine: Making call to close driver server
	I0130 21:14:37.842294  654718 main.go:141] libmachine: (functional-500919) Calling .Close
	I0130 21:14:37.842619  654718 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
	I0130 21:14:37.842669  654718 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:14:37.842683  654718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:14:37.842700  654718 main.go:141] libmachine: Making call to close driver server
	I0130 21:14:37.842712  654718 main.go:141] libmachine: (functional-500919) Calling .Close
	I0130 21:14:37.842939  654718 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:14:37.842956  654718 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:14:37.842983  654718 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
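ImageLoadFromFile fails as a direct consequence of ImageSaveToFile: the earlier image save never produced addon-resizer-save.tar, so the subsequent image load has nothing to stat. A sketch of the intended round trip, using an illustrative /tmp path rather than the workspace path from the run:

    out/minikube-linux-amd64 -p functional-500919 image save gcr.io/google-containers/addon-resizer:functional-500919 /tmp/addon-resizer-save.tar --alsologtostderr
    ls -l /tmp/addon-resizer-save.tar    # the test asserts this tarball exists before loading it back
    out/minikube-linux-amd64 -p functional-500919 image load /tmp/addon-resizer-save.tar --alsologtostderr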

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-500919
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image save --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr
functional_test.go:423: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 image save --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr: exit status 80 (387.98727ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:14:37.943977  654750 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:37.944150  654750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:37.944166  654750 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:37.944173  654750 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:37.944527  654750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:37.945456  654750 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:37.945523  654750 cache_images.go:396] Save images: ["gcr.io/google-containers/addon-resizer:functional-500919"]
	I0130 21:14:37.945672  654750 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:37.946250  654750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:14:37.946309  654750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:14:37.961521  654750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0130 21:14:37.962066  654750 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:14:37.962747  654750 main.go:141] libmachine: Using API Version  1
	I0130 21:14:37.962782  654750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:14:37.963228  654750 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:14:37.963439  654750 main.go:141] libmachine: (functional-500919) Calling .GetState
	I0130 21:14:37.965697  654750 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:14:37.965741  654750 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:14:37.981265  654750 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41943
	I0130 21:14:37.981721  654750 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:14:37.982207  654750 main.go:141] libmachine: Using API Version  1
	I0130 21:14:37.982234  654750 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:14:37.982549  654750 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:14:37.982729  654750 main.go:141] libmachine: (functional-500919) Calling .DriverName
	I0130 21:14:37.982876  654750 cache_images.go:341] SaveImages start: [gcr.io/google-containers/addon-resizer:functional-500919]
	I0130 21:14:37.983010  654750 ssh_runner.go:195] Run: systemctl --version
	I0130 21:14:37.983043  654750 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
	I0130 21:14:37.986153  654750 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
	I0130 21:14:37.986601  654750 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
	I0130 21:14:37.986635  654750 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
	I0130 21:14:37.986754  654750 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
	I0130 21:14:37.986905  654750 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
	I0130 21:14:37.987067  654750 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
	I0130 21:14:37.987220  654750 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
	I0130 21:14:38.115143  654750 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/google-containers/addon-resizer:functional-500919
	I0130 21:14:38.245389  654750 cache_images.go:345] SaveImages completed in 262.47304ms
	W0130 21:14:38.245426  654750 cache_images.go:442] Failed to load cached images for profile functional-500919. make sure the profile is running. saving cached images: image gcr.io/google-containers/addon-resizer:functional-500919 not found
	I0130 21:14:38.245441  654750 cache_images.go:450] succeeded pulling from : 
	I0130 21:14:38.245447  654750 cache_images.go:451] failed pulling from : functional-500919
	I0130 21:14:38.245493  654750 main.go:141] libmachine: Making call to close driver server
	I0130 21:14:38.245504  654750 main.go:141] libmachine: (functional-500919) Calling .Close
	I0130 21:14:38.245851  654750 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:14:38.245868  654750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:14:38.245877  654750 main.go:141] libmachine: Making call to close driver server
	I0130 21:14:38.245885  654750 main.go:141] libmachine: (functional-500919) Calling .Close
	I0130 21:14:38.246158  654750 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:14:38.246175  654750 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
	I0130 21:14:38.246181  654750 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:14:38.248589  654750 out.go:177] 
	W0130 21:14:38.249931  654750 out.go:239] X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919: no such file or directory
	X Exiting due to GUEST_IMAGE_SAVE: Failed to save image: tarball: open /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/google-containers/addon-resizer_functional-500919: no such file or directory
	W0130 21:14:38.249957  654750 out.go:239] * 
	* 
	W0130 21:14:38.254313  654750 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_image_37523167baaa49a1ccfba2570a6a430d146b8afb_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 21:14:38.255762  654750 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:425: saving image from minikube to daemon: exit status 80

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
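The save --daemon failure follows the same pattern: podman inside the node reports the image as not found, and there is no cached tarball on the host to fall back to. A sketch of how that state can be inspected by hand, assuming the profile is running:

    # Check whether the image is present in the node's container storage (the test does the same via ssh_runner).
    out/minikube-linux-amd64 -p functional-500919 ssh -- sudo podman image inspect --format '{{.Id}}' gcr.io/google-containers/addon-resizer:functional-500919
    # On success, image save --daemon should make the tag visible to the local Docker daemon again.
    out/minikube-linux-amd64 -p functional-500919 image save --daemon gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr
    docker image ls gcr.io/google-containers/addon-resizer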

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (181.18s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:207: (dbg) Run:  kubectl --context ingress-addon-legacy-298651 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
E0130 21:17:09.003477  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
addons_test.go:207: (dbg) Done: kubectl --context ingress-addon-legacy-298651 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (18.210008628s)
addons_test.go:232: (dbg) Run:  kubectl --context ingress-addon-legacy-298651 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context ingress-addon-legacy-298651 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [fb2f507d-94fb-4d2e-85be-c538720fdd66] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [fb2f507d-94fb-4d2e-85be-c538720fdd66] Running
addons_test.go:250: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.004422012s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0130 21:19:25.158785  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:19:32.717298  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:32.722678  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:32.732967  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:32.753263  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:32.793604  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:32.873989  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:33.034441  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:33.355040  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:33.995975  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-298651 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.196360279s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context ingress-addon-legacy-298651 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.33
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons disable ingress-dns --alsologtostderr -v=1
E0130 21:19:35.276494  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:37.837202  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:19:42.957787  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons disable ingress-dns --alsologtostderr -v=1: (9.118864608s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons disable ingress --alsologtostderr -v=1: (7.596541182s)
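The failing step in this test is the in-VM curl through the ingress controller; exit status 28 matches curl's operation-timed-out code, so the request received no answer within the SSH session. A hedged manual check mirroring the test's own commands (the -m 30 timeout is added here for convenience and is not part of the recorded run):

    kubectl --context ingress-addon-legacy-298651 -n ingress-nginx wait --for=condition=ready pod --selector=app.kubernetes.io/component=controller --timeout=90s
    kubectl --context ingress-addon-legacy-298651 get pods -l run=nginx
    out/minikube-linux-amd64 -p ingress-addon-legacy-298651 ssh "curl -s -m 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"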
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-298651 -n ingress-addon-legacy-298651
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 logs -n 25
E0130 21:19:52.843909  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:19:53.198522  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-298651 logs -n 25: (1.159196184s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| Command |                                   Args                                   |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| ssh     | functional-500919 ssh sudo                                               | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| image   | functional-500919 image ls                                               | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	| image   | functional-500919                                                        | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | image ls --format json                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh findmnt                                            | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| image   | functional-500919                                                        | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | image ls --format table                                                  |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	| mount   | -p functional-500919                                                     | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdspecific-port3817873563/001:/mount-9p |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1 --port 46464                                      |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh findmnt                                            | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | -T /mount-9p | grep 9p                                                   |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh -- ls                                              | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | -la /mount-9p                                                            |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh sudo                                               | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | umount -f /mount-9p                                                      |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh findmnt                                            | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-500919                                                     | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount3   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-500919                                                     | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount1   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| mount   | -p functional-500919                                                     | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount2   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh findmnt                                            | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | -T /mount1                                                               |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh findmnt                                            | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | -T /mount2                                                               |                             |         |         |                     |                     |
	| ssh     | functional-500919 ssh findmnt                                            | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	|         | -T /mount3                                                               |                             |         |         |                     |                     |
	| mount   | -p functional-500919                                                     | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC |                     |
	|         | --kill=true                                                              |                             |         |         |                     |                     |
	| delete  | -p functional-500919                                                     | functional-500919           | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:15 UTC |
	| start   | -p ingress-addon-legacy-298651                                           | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:15 UTC | 30 Jan 24 21:16 UTC |
	|         | --kubernetes-version=v1.18.20                                            |                             |         |         |                     |                     |
	|         | --memory=4096 --wait=true                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr                                                        |                             |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                                                       |                             |         |         |                     |                     |
	|         | --container-runtime=crio                                                 |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-298651                                              | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:16 UTC | 30 Jan 24 21:16 UTC |
	|         | addons enable ingress                                                    |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-298651                                              | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:16 UTC | 30 Jan 24 21:16 UTC |
	|         | addons enable ingress-dns                                                |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=5                                                   |                             |         |         |                     |                     |
	| ssh     | ingress-addon-legacy-298651                                              | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:17 UTC |                     |
	|         | ssh curl -s http://127.0.0.1/                                            |                             |         |         |                     |                     |
	|         | -H 'Host: nginx.example.com'                                             |                             |         |         |                     |                     |
	| ip      | ingress-addon-legacy-298651 ip                                           | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:19 UTC | 30 Jan 24 21:19 UTC |
	| addons  | ingress-addon-legacy-298651                                              | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:19 UTC | 30 Jan 24 21:19 UTC |
	|         | addons disable ingress-dns                                               |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	| addons  | ingress-addon-legacy-298651                                              | ingress-addon-legacy-298651 | jenkins | v1.32.0 | 30 Jan 24 21:19 UTC | 30 Jan 24 21:19 UTC |
	|         | addons disable ingress                                                   |                             |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                   |                             |         |         |                     |                     |
	|---------|--------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
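	The ingress-addon-legacy run recorded in the table above boils down to a single start invocation; reconstructed from the logged arguments (and using the MINIKUBE_BIN path shown further down), it is roughly:
	
	  out/minikube-linux-amd64 start -p ingress-addon-legacy-298651 \
	    --kubernetes-version=v1.18.20 --memory=4096 --wait=true \
	    --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio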
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 21:15:23
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 21:15:23.399615  656435 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:15:23.399783  656435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:15:23.399792  656435 out.go:309] Setting ErrFile to fd 2...
	I0130 21:15:23.399796  656435 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:15:23.399982  656435 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:15:23.400581  656435 out.go:303] Setting JSON to false
	I0130 21:15:23.401569  656435 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7076,"bootTime":1706642248,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:15:23.401634  656435 start.go:138] virtualization: kvm guest
	I0130 21:15:23.404282  656435 out.go:177] * [ingress-addon-legacy-298651] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:15:23.405721  656435 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 21:15:23.407013  656435 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:15:23.405819  656435 notify.go:220] Checking for updates...
	I0130 21:15:23.409531  656435 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:15:23.411022  656435 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:15:23.412327  656435 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 21:15:23.413613  656435 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 21:15:23.415173  656435 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:15:23.451403  656435 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 21:15:23.452670  656435 start.go:298] selected driver: kvm2
	I0130 21:15:23.452681  656435 start.go:902] validating driver "kvm2" against <nil>
	I0130 21:15:23.452691  656435 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 21:15:23.453408  656435 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:15:23.453511  656435 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 21:15:23.469164  656435 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 21:15:23.469231  656435 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 21:15:23.469511  656435 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 21:15:23.469590  656435 cni.go:84] Creating CNI manager for ""
	I0130 21:15:23.469603  656435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:15:23.469614  656435 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 21:15:23.469625  656435 start_flags.go:321] config:
	{Name:ingress-addon-legacy-298651 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-298651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:15:23.469775  656435 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:15:23.472036  656435 out.go:177] * Starting control plane node ingress-addon-legacy-298651 in cluster ingress-addon-legacy-298651
	I0130 21:15:23.473642  656435 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0130 21:15:23.502737  656435 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0130 21:15:23.502791  656435 cache.go:56] Caching tarball of preloaded images
	I0130 21:15:23.502970  656435 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0130 21:15:23.505216  656435 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0130 21:15:23.506610  656435 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:15:23.539923  656435 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0130 21:15:26.682222  656435 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:15:26.682324  656435 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:15:27.687474  656435 cache.go:59] Finished verifying existence of preloaded tar for  v1.18.20 on crio
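	The preload tarball and checksum logged above can be re-fetched and verified by hand with standard tools if the cache is ever suspect; a sketch using the URL and md5 copied from the download line:
	
	  curl -fLo preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 \
	    "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
	  echo "0d02e096853189c5b37812b400898e14  preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4" | md5sum -c -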
	I0130 21:15:27.687868  656435 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/config.json ...
	I0130 21:15:27.687903  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/config.json: {Name:mk4d0b6e5dbb12b2791632c3c29e435faefc51e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:27.688126  656435 start.go:365] acquiring machines lock for ingress-addon-legacy-298651: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 21:15:27.688167  656435 start.go:369] acquired machines lock for "ingress-addon-legacy-298651" in 20.414µs
	I0130 21:15:27.688182  656435 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-298651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-298651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 21:15:27.688266  656435 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 21:15:27.691673  656435 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0130 21:15:27.691848  656435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:15:27.691905  656435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:15:27.706378  656435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33361
	I0130 21:15:27.706841  656435 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:15:27.707421  656435 main.go:141] libmachine: Using API Version  1
	I0130 21:15:27.707449  656435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:15:27.707863  656435 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:15:27.708114  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetMachineName
	I0130 21:15:27.708263  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:27.708457  656435 start.go:159] libmachine.API.Create for "ingress-addon-legacy-298651" (driver="kvm2")
	I0130 21:15:27.708496  656435 client.go:168] LocalClient.Create starting
	I0130 21:15:27.708578  656435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem
	I0130 21:15:27.708627  656435 main.go:141] libmachine: Decoding PEM data...
	I0130 21:15:27.708652  656435 main.go:141] libmachine: Parsing certificate...
	I0130 21:15:27.708725  656435 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem
	I0130 21:15:27.708755  656435 main.go:141] libmachine: Decoding PEM data...
	I0130 21:15:27.708784  656435 main.go:141] libmachine: Parsing certificate...
	I0130 21:15:27.708818  656435 main.go:141] libmachine: Running pre-create checks...
	I0130 21:15:27.708836  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .PreCreateCheck
	I0130 21:15:27.709169  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetConfigRaw
	I0130 21:15:27.709638  656435 main.go:141] libmachine: Creating machine...
	I0130 21:15:27.709655  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .Create
	I0130 21:15:27.709805  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Creating KVM machine...
	I0130 21:15:27.710989  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found existing default KVM network
	I0130 21:15:27.711693  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:27.711548  656469 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015a20}
	I0130 21:15:27.717514  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | trying to create private KVM network mk-ingress-addon-legacy-298651 192.168.39.0/24...
	I0130 21:15:27.788897  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting up store path in /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651 ...
	I0130 21:15:27.788938  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | private KVM network mk-ingress-addon-legacy-298651 192.168.39.0/24 created
	I0130 21:15:27.788952  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Building disk image from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 21:15:27.788968  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:27.788817  656469 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:15:27.788986  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Downloading /home/jenkins/minikube-integration/18014-640473/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 21:15:28.032386  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:28.032201  656469 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa...
	I0130 21:15:28.297096  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:28.296914  656469 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/ingress-addon-legacy-298651.rawdisk...
	I0130 21:15:28.297127  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Writing magic tar header
	I0130 21:15:28.297163  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Writing SSH key tar header
	I0130 21:15:28.297172  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:28.297053  656469 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651 ...
	I0130 21:15:28.297186  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651
	I0130 21:15:28.297203  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651 (perms=drwx------)
	I0130 21:15:28.297286  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines (perms=drwxr-xr-x)
	I0130 21:15:28.297312  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube (perms=drwxr-xr-x)
	I0130 21:15:28.297326  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines
	I0130 21:15:28.297341  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:15:28.297355  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473
	I0130 21:15:28.297373  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 21:15:28.297385  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home/jenkins
	I0130 21:15:28.297397  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473 (perms=drwxrwxr-x)
	I0130 21:15:28.297412  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 21:15:28.297424  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 21:15:28.297441  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Creating domain...
	I0130 21:15:28.297453  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Checking permissions on dir: /home
	I0130 21:15:28.297483  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Skipping /home - not owner
	I0130 21:15:28.298392  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) define libvirt domain using xml: 
	I0130 21:15:28.298407  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) <domain type='kvm'>
	I0130 21:15:28.298437  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <name>ingress-addon-legacy-298651</name>
	I0130 21:15:28.298457  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <memory unit='MiB'>4096</memory>
	I0130 21:15:28.298469  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <vcpu>2</vcpu>
	I0130 21:15:28.298477  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <features>
	I0130 21:15:28.298483  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <acpi/>
	I0130 21:15:28.298489  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <apic/>
	I0130 21:15:28.298495  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <pae/>
	I0130 21:15:28.298501  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     
	I0130 21:15:28.298507  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   </features>
	I0130 21:15:28.298515  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <cpu mode='host-passthrough'>
	I0130 21:15:28.298522  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   
	I0130 21:15:28.298527  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   </cpu>
	I0130 21:15:28.298533  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <os>
	I0130 21:15:28.298546  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <type>hvm</type>
	I0130 21:15:28.298555  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <boot dev='cdrom'/>
	I0130 21:15:28.298561  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <boot dev='hd'/>
	I0130 21:15:28.298568  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <bootmenu enable='no'/>
	I0130 21:15:28.298573  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   </os>
	I0130 21:15:28.298580  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   <devices>
	I0130 21:15:28.298585  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <disk type='file' device='cdrom'>
	I0130 21:15:28.298596  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/boot2docker.iso'/>
	I0130 21:15:28.298603  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <target dev='hdc' bus='scsi'/>
	I0130 21:15:28.298609  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <readonly/>
	I0130 21:15:28.298614  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </disk>
	I0130 21:15:28.298641  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <disk type='file' device='disk'>
	I0130 21:15:28.298666  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 21:15:28.298681  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/ingress-addon-legacy-298651.rawdisk'/>
	I0130 21:15:28.298692  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <target dev='hda' bus='virtio'/>
	I0130 21:15:28.298703  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </disk>
	I0130 21:15:28.298718  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <interface type='network'>
	I0130 21:15:28.298734  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <source network='mk-ingress-addon-legacy-298651'/>
	I0130 21:15:28.298760  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <model type='virtio'/>
	I0130 21:15:28.298771  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </interface>
	I0130 21:15:28.298782  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <interface type='network'>
	I0130 21:15:28.298798  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <source network='default'/>
	I0130 21:15:28.298810  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <model type='virtio'/>
	I0130 21:15:28.298832  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </interface>
	I0130 21:15:28.298870  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <serial type='pty'>
	I0130 21:15:28.298885  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <target port='0'/>
	I0130 21:15:28.298891  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </serial>
	I0130 21:15:28.298897  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <console type='pty'>
	I0130 21:15:28.298905  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <target type='serial' port='0'/>
	I0130 21:15:28.298911  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </console>
	I0130 21:15:28.298921  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     <rng model='virtio'>
	I0130 21:15:28.298933  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)       <backend model='random'>/dev/random</backend>
	I0130 21:15:28.298950  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     </rng>
	I0130 21:15:28.298963  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     
	I0130 21:15:28.298971  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)     
	I0130 21:15:28.298981  656435 main.go:141] libmachine: (ingress-addon-legacy-298651)   </devices>
	I0130 21:15:28.298992  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) </domain>
	I0130 21:15:28.299004  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) 
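	When the "waiting for machine to come up" retries below drag on, the domain defined by the XML above can be inspected out-of-band with the libvirt client; a sketch, assuming virsh is available on the host and using the qemu:///system URI and network name taken from the log:
	
	  virsh -c qemu:///system dumpxml ingress-addon-legacy-298651
	  virsh -c qemu:///system domifaddr ingress-addon-legacy-298651
	  virsh -c qemu:///system net-dhcp-leases mk-ingress-addon-legacy-298651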
	I0130 21:15:28.303459  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:08:b9:81 in network default
	I0130 21:15:28.304037  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Ensuring networks are active...
	I0130 21:15:28.304062  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:28.304772  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Ensuring network default is active
	I0130 21:15:28.305105  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Ensuring network mk-ingress-addon-legacy-298651 is active
	I0130 21:15:28.305711  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Getting domain xml...
	I0130 21:15:28.306438  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Creating domain...
	I0130 21:15:29.507924  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Waiting to get IP...
	I0130 21:15:29.508878  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:29.509215  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:29.509236  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:29.509179  656469 retry.go:31] will retry after 296.048035ms: waiting for machine to come up
	I0130 21:15:29.806630  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:29.807207  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:29.807243  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:29.807114  656469 retry.go:31] will retry after 270.440734ms: waiting for machine to come up
	I0130 21:15:30.079654  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:30.080069  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:30.080099  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:30.080018  656469 retry.go:31] will retry after 312.619395ms: waiting for machine to come up
	I0130 21:15:30.394629  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:30.395052  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:30.395078  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:30.395006  656469 retry.go:31] will retry after 444.1119ms: waiting for machine to come up
	I0130 21:15:30.840745  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:30.841268  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:30.841310  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:30.841225  656469 retry.go:31] will retry after 724.704176ms: waiting for machine to come up
	I0130 21:15:31.567670  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:31.568007  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:31.568038  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:31.567971  656469 retry.go:31] will retry after 860.593542ms: waiting for machine to come up
	I0130 21:15:32.430275  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:32.430662  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:32.430688  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:32.430603  656469 retry.go:31] will retry after 1.11806823s: waiting for machine to come up
	I0130 21:15:33.550314  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:33.550736  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:33.550754  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:33.550706  656469 retry.go:31] will retry after 1.059729305s: waiting for machine to come up
	I0130 21:15:34.611946  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:34.612306  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:34.612334  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:34.612233  656469 retry.go:31] will retry after 1.401079297s: waiting for machine to come up
	I0130 21:15:36.014618  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:36.014987  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:36.015026  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:36.014931  656469 retry.go:31] will retry after 1.41802462s: waiting for machine to come up
	I0130 21:15:37.434346  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:37.434807  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:37.434842  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:37.434753  656469 retry.go:31] will retry after 2.030932383s: waiting for machine to come up
	I0130 21:15:39.468685  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:39.469083  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:39.469111  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:39.469053  656469 retry.go:31] will retry after 2.50412545s: waiting for machine to come up
	I0130 21:15:41.974771  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:41.975254  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:41.975287  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:41.975187  656469 retry.go:31] will retry after 2.852030898s: waiting for machine to come up
	I0130 21:15:44.828500  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:44.828818  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find current IP address of domain ingress-addon-legacy-298651 in network mk-ingress-addon-legacy-298651
	I0130 21:15:44.828849  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | I0130 21:15:44.828782  656469 retry.go:31] will retry after 4.661633585s: waiting for machine to come up
	I0130 21:15:49.492800  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.493219  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Found IP for machine: 192.168.39.33
	I0130 21:15:49.493255  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has current primary IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.493271  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Reserving static IP address...
	I0130 21:15:49.493623  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-298651", mac: "52:54:00:40:50:61", ip: "192.168.39.33"} in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.566566  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Reserved static IP address: 192.168.39.33
	I0130 21:15:49.566611  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Waiting for SSH to be available...
	I0130 21:15:49.566621  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Getting to WaitForSSH function...
	I0130 21:15:49.569242  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.569724  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:minikube Clientid:01:52:54:00:40:50:61}
	I0130 21:15:49.569760  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.569837  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Using SSH client type: external
	I0130 21:15:49.569864  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa (-rw-------)
	I0130 21:15:49.569913  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.33 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 21:15:49.569937  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | About to run SSH command:
	I0130 21:15:49.569959  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | exit 0
	I0130 21:15:49.653364  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | SSH cmd err, output: <nil>: 
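	The WaitForSSH probe above shells out to the system ssh binary with the arguments printed in the DBG lines; condensed to its essentials, the equivalent manual check is approximately:
	
	  ssh -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa \
	      -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	      docker@192.168.39.33 'exit 0'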
	I0130 21:15:49.653654  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) KVM machine creation complete!
	I0130 21:15:49.653951  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetConfigRaw
	I0130 21:15:49.654503  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:49.654712  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:49.654882  656435 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 21:15:49.654898  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetState
	I0130 21:15:49.656176  656435 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 21:15:49.656192  656435 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 21:15:49.656198  656435 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 21:15:49.656205  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:49.658567  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.658906  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:49.658930  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.659108  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:49.659317  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:49.659464  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:49.659605  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:49.659785  656435 main.go:141] libmachine: Using SSH client type: native
	I0130 21:15:49.660175  656435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0130 21:15:49.660193  656435 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 21:15:49.768827  656435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:15:49.768866  656435 main.go:141] libmachine: Detecting the provisioner...
	I0130 21:15:49.768882  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:49.771796  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.772195  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:49.772226  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.772376  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:49.772590  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:49.772733  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:49.772907  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:49.773137  656435 main.go:141] libmachine: Using SSH client type: native
	I0130 21:15:49.773492  656435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0130 21:15:49.773519  656435 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 21:15:49.882138  656435 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 21:15:49.882253  656435 main.go:141] libmachine: found compatible host: buildroot
	I0130 21:15:49.882270  656435 main.go:141] libmachine: Provisioning with buildroot...
	I0130 21:15:49.882279  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetMachineName
	I0130 21:15:49.882569  656435 buildroot.go:166] provisioning hostname "ingress-addon-legacy-298651"
	I0130 21:15:49.882605  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetMachineName
	I0130 21:15:49.882798  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:49.886187  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.886602  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:49.886635  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:49.886853  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:49.887057  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:49.887244  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:49.887411  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:49.887571  656435 main.go:141] libmachine: Using SSH client type: native
	I0130 21:15:49.887914  656435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0130 21:15:49.887935  656435 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-298651 && echo "ingress-addon-legacy-298651" | sudo tee /etc/hostname
	I0130 21:15:50.006206  656435 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-298651
	
	I0130 21:15:50.006244  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.009218  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.009535  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.009572  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.009784  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.010013  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.010196  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.010359  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.010568  656435 main.go:141] libmachine: Using SSH client type: native
	I0130 21:15:50.010943  656435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0130 21:15:50.010965  656435 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-298651' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-298651/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-298651' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 21:15:50.125918  656435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:15:50.125949  656435 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 21:15:50.125993  656435 buildroot.go:174] setting up certificates
	I0130 21:15:50.126005  656435 provision.go:83] configureAuth start
	I0130 21:15:50.126017  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetMachineName
	I0130 21:15:50.126325  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetIP
	I0130 21:15:50.129086  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.129481  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.129522  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.129702  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.131954  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.132315  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.132351  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.132500  656435 provision.go:138] copyHostCerts
	I0130 21:15:50.132530  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:15:50.132568  656435 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 21:15:50.132577  656435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:15:50.132641  656435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 21:15:50.132714  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:15:50.132742  656435 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 21:15:50.132749  656435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:15:50.132772  656435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 21:15:50.132812  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:15:50.132827  656435 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 21:15:50.132833  656435 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:15:50.132853  656435 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 21:15:50.132896  656435 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-298651 san=[192.168.39.33 192.168.39.33 localhost 127.0.0.1 minikube ingress-addon-legacy-298651]
	I0130 21:15:50.215889  656435 provision.go:172] copyRemoteCerts
	I0130 21:15:50.215952  656435 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 21:15:50.215981  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.218718  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.219015  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.219102  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.219196  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.219417  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.219599  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.219726  656435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa Username:docker}
	I0130 21:15:50.302478  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 21:15:50.302578  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 21:15:50.325450  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 21:15:50.325545  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 21:15:50.348496  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 21:15:50.348625  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 21:15:50.371028  656435 provision.go:86] duration metric: configureAuth took 245.004375ms
	I0130 21:15:50.371060  656435 buildroot.go:189] setting minikube options for container-runtime
	I0130 21:15:50.371243  656435 config.go:182] Loaded profile config "ingress-addon-legacy-298651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0130 21:15:50.371325  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.373816  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.374097  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.374131  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.374385  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.374572  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.374746  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.374855  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.375015  656435 main.go:141] libmachine: Using SSH client type: native
	I0130 21:15:50.375485  656435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0130 21:15:50.375509  656435 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 21:15:50.679814  656435 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 21:15:50.679853  656435 main.go:141] libmachine: Checking connection to Docker...
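The provisioning command a few lines above loses its printf argument to the logger's %!s(MISSING) verb; judging from the option echoed back in the SSH output, it presumably amounts to this shell sketch (illustrative only):

        sudo mkdir -p /etc/sysconfig
        printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
        sudo systemctl restart crio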
	I0130 21:15:50.679866  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetURL
	I0130 21:15:50.681287  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Using libvirt version 6000000
	I0130 21:15:50.683743  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.684147  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.684188  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.684377  656435 main.go:141] libmachine: Docker is up and running!
	I0130 21:15:50.684406  656435 main.go:141] libmachine: Reticulating splines...
	I0130 21:15:50.684416  656435 client.go:171] LocalClient.Create took 22.975907278s
	I0130 21:15:50.684446  656435 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-298651" took 22.975989613s
	I0130 21:15:50.684474  656435 start.go:300] post-start starting for "ingress-addon-legacy-298651" (driver="kvm2")
	I0130 21:15:50.684493  656435 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 21:15:50.684527  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:50.684788  656435 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 21:15:50.684825  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.687063  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.687398  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.687432  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.687566  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.687759  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.687983  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.688167  656435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa Username:docker}
	I0130 21:15:50.770723  656435 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 21:15:50.774840  656435 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 21:15:50.774873  656435 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 21:15:50.774955  656435 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 21:15:50.775074  656435 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 21:15:50.775090  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /etc/ssl/certs/6477182.pem
	I0130 21:15:50.775181  656435 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 21:15:50.783099  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:15:50.805223  656435 start.go:303] post-start completed in 120.732245ms
	I0130 21:15:50.805284  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetConfigRaw
	I0130 21:15:50.805935  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetIP
	I0130 21:15:50.808860  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.809198  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.809239  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.809571  656435 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/config.json ...
	I0130 21:15:50.809800  656435 start.go:128] duration metric: createHost completed in 23.121521375s
	I0130 21:15:50.809832  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.812259  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.812606  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.812636  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.812812  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.813062  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.813242  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.813431  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.813607  656435 main.go:141] libmachine: Using SSH client type: native
	I0130 21:15:50.813979  656435 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.33 22 <nil> <nil>}
	I0130 21:15:50.813997  656435 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 21:15:50.922258  656435 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706649350.899487672
	
	I0130 21:15:50.922292  656435 fix.go:206] guest clock: 1706649350.899487672
	I0130 21:15:50.922303  656435 fix.go:219] Guest: 2024-01-30 21:15:50.899487672 +0000 UTC Remote: 2024-01-30 21:15:50.809814577 +0000 UTC m=+27.462505075 (delta=89.673095ms)
	I0130 21:15:50.922394  656435 fix.go:190] guest clock delta is within tolerance: 89.673095ms
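The guest clock probe has its format verbs mangled the same way (%!s(MISSING).%!N(MISSING)); given the fractional output it is presumably the usual epoch query, sketched here:

        # seconds.nanoseconds since the epoch, compared against the host wall clock
        date +%s.%N    # -> 1706649350.899487672 in this run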
	I0130 21:15:50.922405  656435 start.go:83] releasing machines lock for "ingress-addon-legacy-298651", held for 23.23422974s
	I0130 21:15:50.922435  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:50.922828  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetIP
	I0130 21:15:50.925832  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.926254  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.926285  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.926481  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:50.927011  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:50.927214  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:15:50.927302  656435 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 21:15:50.927358  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.927504  656435 ssh_runner.go:195] Run: cat /version.json
	I0130 21:15:50.927532  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:15:50.930125  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.930325  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.930512  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.930540  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.930664  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:50.930685  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:50.930706  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.930873  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:15:50.930882  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.931042  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:15:50.931047  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.931204  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:15:50.931213  656435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa Username:docker}
	I0130 21:15:50.931352  656435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa Username:docker}
	I0130 21:15:51.041774  656435 ssh_runner.go:195] Run: systemctl --version
	I0130 21:15:51.047384  656435 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 21:15:51.203511  656435 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 21:15:51.209600  656435 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 21:15:51.209664  656435 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 21:15:51.223239  656435 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
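The find invocation above is logged with its shell quoting stripped and its %p verb swallowed (%!p(MISSING)); a runnable equivalent (quoting added, same matching logic) that renames any bridge/podman CNI configs out of the way:

        sudo find /etc/cni/net.d -maxdepth 1 -type f \
          \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
          -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' sh {} \;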
	I0130 21:15:51.223268  656435 start.go:475] detecting cgroup driver to use...
	I0130 21:15:51.223341  656435 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 21:15:51.237545  656435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 21:15:51.251479  656435 docker.go:217] disabling cri-docker service (if available) ...
	I0130 21:15:51.251551  656435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 21:15:51.264723  656435 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 21:15:51.277982  656435 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 21:15:51.392370  656435 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 21:15:51.509377  656435 docker.go:233] disabling docker service ...
	I0130 21:15:51.509450  656435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 21:15:51.522715  656435 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 21:15:51.534937  656435 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 21:15:51.643566  656435 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 21:15:51.751496  656435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 21:15:51.764017  656435 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 21:15:51.782069  656435 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0130 21:15:51.782131  656435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:15:51.791744  656435 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 21:15:51.791821  656435 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:15:51.801129  656435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:15:51.810673  656435 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
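Taken together, the sed edits above (plus the daemon-reload and crio restart a few lines below) pin the pause image and switch CRI-O to the cgroupfs cgroup manager; condensed into a standalone sketch (the CONF variable is just shorthand):

        CONF=/etc/crio/crio.conf.d/02-crio.conf
        sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' "$CONF"
        sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
        sudo sed -i '/conmon_cgroup = .*/d' "$CONF"                           # drop any stale conmon_cgroup
        sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"    # re-add it right after cgroup_manager
        sudo systemctl daemon-reload && sudo systemctl restart crio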
	I0130 21:15:51.819825  656435 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 21:15:51.829502  656435 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 21:15:51.837375  656435 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 21:15:51.837419  656435 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 21:15:51.849148  656435 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 21:15:51.858917  656435 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 21:15:51.958372  656435 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 21:15:52.123013  656435 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 21:15:52.123122  656435 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 21:15:52.128378  656435 start.go:543] Will wait 60s for crictl version
	I0130 21:15:52.128461  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:52.132154  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 21:15:52.179604  656435 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 21:15:52.179720  656435 ssh_runner.go:195] Run: crio --version
	I0130 21:15:52.234479  656435 ssh_runner.go:195] Run: crio --version
	I0130 21:15:52.281267  656435 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0130 21:15:52.282767  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetIP
	I0130 21:15:52.285355  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:52.285672  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:15:52.285698  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:15:52.285934  656435 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 21:15:52.289984  656435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 21:15:52.301775  656435 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0130 21:15:52.301849  656435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 21:15:52.336607  656435 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0130 21:15:52.336676  656435 ssh_runner.go:195] Run: which lz4
	I0130 21:15:52.340462  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0130 21:15:52.340602  656435 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 21:15:52.344675  656435 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 21:15:52.344714  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0130 21:15:54.130476  656435 crio.go:444] Took 1.789907 seconds to copy over tarball
	I0130 21:15:54.130554  656435 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 21:15:57.460670  656435 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.330068012s)
	I0130 21:15:57.460705  656435 crio.go:451] Took 3.330202 seconds to extract the tarball
	I0130 21:15:57.460716  656435 ssh_runner.go:146] rm: /preloaded.tar.lz4
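The preload path above is how the run avoids pulling every image over the network: the lz4 tarball is copied to the guest and unpacked over /var, which holds CRI-O's image store. The same sequence as shown in the log:

        # unpack the preloaded image tarball into /var, then verify with crictl
        sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
        rm /preloaded.tar.lz4
        sudo crictl images --output json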
	I0130 21:15:57.504035  656435 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 21:15:57.554480  656435 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0130 21:15:57.554519  656435 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 21:15:57.554627  656435 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0130 21:15:57.554647  656435 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0130 21:15:57.554676  656435 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 21:15:57.554702  656435 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0130 21:15:57.554651  656435 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0130 21:15:57.554607  656435 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 21:15:57.554619  656435 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0130 21:15:57.554652  656435 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0130 21:15:57.556017  656435 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 21:15:57.556034  656435 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0130 21:15:57.556040  656435 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0130 21:15:57.556050  656435 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0130 21:15:57.556058  656435 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 21:15:57.556074  656435 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0130 21:15:57.556060  656435 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0130 21:15:57.556017  656435 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0130 21:15:57.761802  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0130 21:15:57.761802  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 21:15:57.766693  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0130 21:15:57.768850  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0130 21:15:57.772519  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 21:15:57.774415  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0130 21:15:57.785785  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0130 21:15:57.789287  656435 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0130 21:15:57.927419  656435 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0130 21:15:57.927490  656435 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 21:15:57.927576  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:57.938742  656435 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0130 21:15:57.938812  656435 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0130 21:15:57.938888  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:57.948806  656435 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0130 21:15:57.948863  656435 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0130 21:15:57.948923  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:57.985506  656435 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0130 21:15:57.985555  656435 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0130 21:15:57.985597  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:58.082111  656435 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0130 21:15:58.082169  656435 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0130 21:15:58.082228  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:58.082227  656435 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0130 21:15:58.082273  656435 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0130 21:15:58.082289  656435 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0130 21:15:58.082316  656435 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0130 21:15:58.082322  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:58.082339  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0130 21:15:58.082349  656435 ssh_runner.go:195] Run: which crictl
	I0130 21:15:58.082376  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0130 21:15:58.082394  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0130 21:15:58.082449  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0130 21:15:58.172663  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0130 21:15:58.172717  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0130 21:15:58.188716  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0130 21:15:58.188815  656435 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0130 21:15:58.188827  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0130 21:15:58.188913  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0130 21:15:58.188956  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0130 21:15:58.217198  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0130 21:15:58.255555  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0130 21:15:58.255640  656435 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0130 21:15:58.255694  656435 cache_images.go:92] LoadImages completed in 701.158257ms
	W0130 21:15:58.255763  656435 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20: no such file or directory
	I0130 21:15:58.255931  656435 ssh_runner.go:195] Run: crio config
	I0130 21:15:58.322837  656435 cni.go:84] Creating CNI manager for ""
	I0130 21:15:58.322868  656435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:15:58.322897  656435 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 21:15:58.322922  656435 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.33 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-298651 NodeName:ingress-addon-legacy-298651 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 21:15:58.323065  656435 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-298651"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 21:15:58.323174  656435 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-298651 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-298651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 21:15:58.323247  656435 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0130 21:15:58.332605  656435 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 21:15:58.332693  656435 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 21:15:58.340975  656435 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (435 bytes)
	I0130 21:15:58.356902  656435 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0130 21:15:58.374882  656435 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0130 21:15:58.392954  656435 ssh_runner.go:195] Run: grep 192.168.39.33	control-plane.minikube.internal$ /etc/hosts
	I0130 21:15:58.397026  656435 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.33	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 21:15:58.410705  656435 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651 for IP: 192.168.39.33
	I0130 21:15:58.568290  656435 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.568484  656435 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 21:15:58.568545  656435 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 21:15:58.568611  656435 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.key
	I0130 21:15:58.568628  656435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt with IP's: []
	I0130 21:15:58.665903  656435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt ...
	I0130 21:15:58.665941  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: {Name:mk1cbf5869a23f7b97c197dac396f363e61ffe63 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.666122  656435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.key ...
	I0130 21:15:58.666136  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.key: {Name:mkad749ccd8b22e38e0305a596a21091b63b12b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.666202  656435 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key.f7571e27
	I0130 21:15:58.666219  656435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt.f7571e27 with IP's: [192.168.39.33 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 21:15:58.805107  656435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt.f7571e27 ...
	I0130 21:15:58.805146  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt.f7571e27: {Name:mkada06f7b8cc01c07ae6decad0f14067d7b242c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.805308  656435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key.f7571e27 ...
	I0130 21:15:58.805322  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key.f7571e27: {Name:mk4ddfdd7083252a9edb77f72a464bdf88ccb226 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.805394  656435 certs.go:337] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt.f7571e27 -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt
	I0130 21:15:58.805519  656435 certs.go:341] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key.f7571e27 -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key
	I0130 21:15:58.805592  656435 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.key
	I0130 21:15:58.805615  656435 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.crt with IP's: []
	I0130 21:15:58.935811  656435 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.crt ...
	I0130 21:15:58.935848  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.crt: {Name:mk92ec904abc1431c9944de2b24b111f82f9e149 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.936015  656435 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.key ...
	I0130 21:15:58.936030  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.key: {Name:mka0b3a3aec8f73d0a2e65d14afd13a2e86d5457 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:15:58.936099  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0130 21:15:58.936145  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0130 21:15:58.936163  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0130 21:15:58.936173  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0130 21:15:58.936185  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 21:15:58.936197  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 21:15:58.936209  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 21:15:58.936222  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 21:15:58.936300  656435 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 21:15:58.936335  656435 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 21:15:58.936345  656435 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 21:15:58.936375  656435 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 21:15:58.936402  656435 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 21:15:58.936423  656435 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 21:15:58.936464  656435 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:15:58.936495  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /usr/share/ca-certificates/6477182.pem
	I0130 21:15:58.936511  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:15:58.936526  656435 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem -> /usr/share/ca-certificates/647718.pem
	I0130 21:15:58.937181  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 21:15:58.961973  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 21:15:58.985770  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 21:15:59.008870  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 21:15:59.033654  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 21:15:59.056521  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 21:15:59.080851  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 21:15:59.106026  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 21:15:59.130219  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 21:15:59.154172  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 21:15:59.176979  656435 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 21:15:59.200640  656435 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 21:15:59.216588  656435 ssh_runner.go:195] Run: openssl version
	I0130 21:15:59.222464  656435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 21:15:59.233683  656435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 21:15:59.238880  656435 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:15:59.238952  656435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 21:15:59.244939  656435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 21:15:59.255099  656435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 21:15:59.265604  656435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:15:59.270800  656435 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:15:59.270877  656435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:15:59.276794  656435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 21:15:59.287013  656435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 21:15:59.296746  656435 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 21:15:59.301949  656435 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:15:59.302040  656435 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 21:15:59.307886  656435 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
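The test/ln/openssl triples above implement the hashed-symlink layout OpenSSL expects under /etc/ssl/certs: each CA is linked as <subject-hash>.0. For one of the certs above (HASH is illustrative shorthand; the file itself was scp'd onto the node earlier in the log):

        HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem)   # 3ec20f2e in this run
        sudo ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/${HASH}.0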
	I0130 21:15:59.318332  656435 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 21:15:59.322982  656435 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 21:15:59.323058  656435 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-298651 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-298651 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:15:59.323154  656435 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 21:15:59.323237  656435 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 21:15:59.370508  656435 cri.go:89] found id: ""
	I0130 21:15:59.370594  656435 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 21:15:59.379553  656435 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 21:15:59.389259  656435 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 21:15:59.397852  656435 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 21:15:59.397905  656435 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0130 21:15:59.459175  656435 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0130 21:15:59.459240  656435 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 21:15:59.600550  656435 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 21:15:59.600709  656435 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 21:15:59.600843  656435 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 21:15:59.830036  656435 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 21:15:59.831116  656435 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 21:15:59.831191  656435 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 21:15:59.947138  656435 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 21:16:00.015180  656435 out.go:204]   - Generating certificates and keys ...
	I0130 21:16:00.015317  656435 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 21:16:00.015403  656435 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 21:16:00.161130  656435 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 21:16:00.324379  656435 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 21:16:00.645632  656435 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 21:16:00.927909  656435 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 21:16:01.031425  656435 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 21:16:01.031744  656435 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-298651 localhost] and IPs [192.168.39.33 127.0.0.1 ::1]
	I0130 21:16:01.149306  656435 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 21:16:01.149565  656435 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-298651 localhost] and IPs [192.168.39.33 127.0.0.1 ::1]
	I0130 21:16:01.232861  656435 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 21:16:01.492824  656435 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 21:16:01.670535  656435 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 21:16:01.670674  656435 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 21:16:01.806027  656435 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 21:16:02.019849  656435 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 21:16:02.104553  656435 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 21:16:02.297585  656435 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 21:16:02.298373  656435 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 21:16:02.300346  656435 out.go:204]   - Booting up control plane ...
	I0130 21:16:02.300463  656435 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 21:16:02.306188  656435 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 21:16:02.307157  656435 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 21:16:02.307813  656435 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 21:16:02.310036  656435 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 21:16:12.315941  656435 kubeadm.go:322] [apiclient] All control plane components are healthy after 10.008268 seconds
	I0130 21:16:12.316074  656435 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 21:16:12.333291  656435 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 21:16:12.876274  656435 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 21:16:12.876513  656435 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-298651 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0130 21:16:13.389879  656435 kubeadm.go:322] [bootstrap-token] Using token: 07s79b.xjn82i7znpo04zgd
	I0130 21:16:13.391614  656435 out.go:204]   - Configuring RBAC rules ...
	I0130 21:16:13.391772  656435 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 21:16:13.399345  656435 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 21:16:13.411896  656435 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 21:16:13.417175  656435 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 21:16:13.422447  656435 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 21:16:13.426049  656435 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 21:16:13.438397  656435 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 21:16:13.725247  656435 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 21:16:13.838675  656435 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 21:16:13.839609  656435 kubeadm.go:322] 
	I0130 21:16:13.839661  656435 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 21:16:13.839668  656435 kubeadm.go:322] 
	I0130 21:16:13.839784  656435 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 21:16:13.839812  656435 kubeadm.go:322] 
	I0130 21:16:13.839856  656435 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 21:16:13.839912  656435 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 21:16:13.839978  656435 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 21:16:13.840019  656435 kubeadm.go:322] 
	I0130 21:16:13.840096  656435 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 21:16:13.840208  656435 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 21:16:13.840289  656435 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 21:16:13.840299  656435 kubeadm.go:322] 
	I0130 21:16:13.840394  656435 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 21:16:13.840501  656435 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 21:16:13.840514  656435 kubeadm.go:322] 
	I0130 21:16:13.840614  656435 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 07s79b.xjn82i7znpo04zgd \
	I0130 21:16:13.840736  656435 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 21:16:13.840770  656435 kubeadm.go:322]     --control-plane 
	I0130 21:16:13.840780  656435 kubeadm.go:322] 
	I0130 21:16:13.840882  656435 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 21:16:13.840890  656435 kubeadm.go:322] 
	I0130 21:16:13.840995  656435 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 07s79b.xjn82i7znpo04zgd \
	I0130 21:16:13.841099  656435 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 21:16:13.841528  656435 kubeadm.go:322] W0130 21:15:59.450058     964 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0130 21:16:13.841641  656435 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 21:16:13.841786  656435 kubeadm.go:322] W0130 21:16:02.299332     964 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0130 21:16:13.841890  656435 kubeadm.go:322] W0130 21:16:02.300397     964 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0130 21:16:13.841936  656435 cni.go:84] Creating CNI manager for ""
	I0130 21:16:13.841953  656435 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:16:13.843718  656435 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 21:16:13.844981  656435 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 21:16:13.855852  656435 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
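
The two steps above create /etc/cni/net.d and copy in a 457-byte bridge conflist named 1-k8s.conflist; the file's actual contents are not shown in this log. A hedged Go sketch of writing such a file, with an illustrative bridge configuration that is an assumption for the sketch rather than the config minikube generated here:

    package main

    import "os"

    // Illustrative CNI conflist for a simple bridge network. The plugin names,
    // subnet and options are assumptions, not the bytes minikube wrote in this run.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    `

    func main() {
        // Equivalent of the "mkdir -p /etc/cni/net.d" plus scp steps in the log
        // (requires root on the node).
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            panic(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
            panic(err)
        }
    }
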
	I0130 21:16:13.879585  656435 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 21:16:13.879688  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:13.879699  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=ingress-addon-legacy-298651 minikube.k8s.io/updated_at=2024_01_30T21_16_13_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:13.899901  656435 ops.go:34] apiserver oom_adj: -16
	I0130 21:16:14.064050  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:14.564961  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:15.064730  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:15.564529  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:16.064745  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:16.564443  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:17.064513  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:17.565105  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:18.064755  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:18.564683  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:19.065149  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:19.565144  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:20.064677  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:20.564769  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:21.064216  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:21.564818  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:22.065115  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:22.564138  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:23.065002  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:23.564455  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:24.064306  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:24.565127  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:25.064276  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:25.564829  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:26.064992  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:26.564737  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:27.064866  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:27.564228  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:28.064333  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:28.564604  656435 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:16:28.659128  656435 kubeadm.go:1088] duration metric: took 14.779539209s to wait for elevateKubeSystemPrivileges.
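
The burst of "kubectl get sa default" runs above is a simple poll: after kubeadm init, minikube retries roughly every 500ms until the "default" ServiceAccount exists, which indicates the controller-manager has finished creating service accounts so the RBAC/addon steps can proceed (about 14.8s in this run). A minimal client-go sketch of that kind of wait; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load a kubeconfig; the path is illustrative only.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()

        // Poll every 500ms until the "default" ServiceAccount shows up.
        for {
            _, err := client.CoreV1().ServiceAccounts("default").Get(ctx, "default", metav1.GetOptions{})
            if err == nil {
                fmt.Println("default service account is ready")
                return
            }
            select {
            case <-ctx.Done():
                panic("timed out waiting for the default service account")
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
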
	I0130 21:16:28.659169  656435 kubeadm.go:406] StartCluster complete in 29.336122544s
	I0130 21:16:28.659190  656435 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:16:28.659273  656435 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:16:28.660049  656435 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:16:28.660277  656435 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 21:16:28.660426  656435 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 21:16:28.660515  656435 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-298651"
	I0130 21:16:28.660540  656435 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-298651"
	I0130 21:16:28.660555  656435 addons.go:234] Setting addon storage-provisioner=true in "ingress-addon-legacy-298651"
	I0130 21:16:28.660563  656435 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-298651"
	I0130 21:16:28.660604  656435 config.go:182] Loaded profile config "ingress-addon-legacy-298651": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0130 21:16:28.660623  656435 host.go:66] Checking if "ingress-addon-legacy-298651" exists ...
	I0130 21:16:28.661060  656435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:16:28.661097  656435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:16:28.660936  656435 kapi.go:59] client config for ingress-addon-legacy-298651: &rest.Config{Host:"https://192.168.39.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:16:28.661162  656435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:16:28.661199  656435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:16:28.661914  656435 cert_rotation.go:137] Starting client certificate rotation controller
	I0130 21:16:28.677027  656435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46655
	I0130 21:16:28.677114  656435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43873
	I0130 21:16:28.677459  656435 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:16:28.677597  656435 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:16:28.678002  656435 main.go:141] libmachine: Using API Version  1
	I0130 21:16:28.678030  656435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:16:28.678116  656435 main.go:141] libmachine: Using API Version  1
	I0130 21:16:28.678135  656435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:16:28.678374  656435 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:16:28.678458  656435 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:16:28.678636  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetState
	I0130 21:16:28.678971  656435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:16:28.679004  656435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:16:28.680800  656435 kapi.go:59] client config for ingress-addon-legacy-298651: &rest.Config{Host:"https://192.168.39.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:16:28.681064  656435 addons.go:234] Setting addon default-storageclass=true in "ingress-addon-legacy-298651"
	I0130 21:16:28.681104  656435 host.go:66] Checking if "ingress-addon-legacy-298651" exists ...
	I0130 21:16:28.681397  656435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:16:28.681427  656435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:16:28.694528  656435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44793
	I0130 21:16:28.694990  656435 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:16:28.695496  656435 main.go:141] libmachine: Using API Version  1
	I0130 21:16:28.695522  656435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:16:28.695784  656435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33849
	I0130 21:16:28.695915  656435 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:16:28.696148  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetState
	I0130 21:16:28.696295  656435 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:16:28.696807  656435 main.go:141] libmachine: Using API Version  1
	I0130 21:16:28.696829  656435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:16:28.697193  656435 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:16:28.697817  656435 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:16:28.697852  656435 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:16:28.698149  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:16:28.700212  656435 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 21:16:28.702075  656435 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 21:16:28.702106  656435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 21:16:28.702129  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:16:28.705061  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:16:28.705571  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:16:28.705602  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:16:28.705792  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:16:28.706036  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:16:28.706218  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:16:28.706383  656435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa Username:docker}
	I0130 21:16:28.713963  656435 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43273
	I0130 21:16:28.714358  656435 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:16:28.714889  656435 main.go:141] libmachine: Using API Version  1
	I0130 21:16:28.714914  656435 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:16:28.715256  656435 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:16:28.715446  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetState
	I0130 21:16:28.716937  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .DriverName
	I0130 21:16:28.717190  656435 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 21:16:28.717204  656435 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 21:16:28.717219  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHHostname
	I0130 21:16:28.719881  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:16:28.720298  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:40:50:61", ip: ""} in network mk-ingress-addon-legacy-298651: {Iface:virbr1 ExpiryTime:2024-01-30 22:15:43 +0000 UTC Type:0 Mac:52:54:00:40:50:61 Iaid: IPaddr:192.168.39.33 Prefix:24 Hostname:ingress-addon-legacy-298651 Clientid:01:52:54:00:40:50:61}
	I0130 21:16:28.720374  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | domain ingress-addon-legacy-298651 has defined IP address 192.168.39.33 and MAC address 52:54:00:40:50:61 in network mk-ingress-addon-legacy-298651
	I0130 21:16:28.720543  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHPort
	I0130 21:16:28.720749  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHKeyPath
	I0130 21:16:28.720884  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .GetSSHUsername
	I0130 21:16:28.721062  656435 sshutil.go:53] new ssh client: &{IP:192.168.39.33 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/ingress-addon-legacy-298651/id_rsa Username:docker}
	I0130 21:16:28.800621  656435 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 21:16:28.863491  656435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 21:16:28.885047  656435 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 21:16:29.387528  656435 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-298651" context rescaled to 1 replicas
	I0130 21:16:29.387583  656435 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.33 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 21:16:29.390443  656435 out.go:177] * Verifying Kubernetes components...
	I0130 21:16:29.392202  656435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:16:29.523277  656435 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
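
The CoreDNS edit above (the sed expression piped into kubectl replace) injects a hosts block into the Corefile so that host.minikube.internal resolves to the host-side IP 192.168.39.1 from inside the cluster. A hedged client-go sketch of the same edit; the string surgery below is a simplified assumption, not minikube's exact sed expression, and the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "strings"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    const hostsBlock = `        hosts {
               192.168.39.1 host.minikube.internal
               fallthrough
            }
    `

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder path
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ctx := context.Background()

        cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        corefile := cm.Data["Corefile"]
        if !strings.Contains(corefile, "host.minikube.internal") {
            // Insert the hosts block just before the forward plugin, mirroring the sed in the log.
            cm.Data["Corefile"] = strings.Replace(corefile, "        forward .", hostsBlock+"        forward .", 1)
            if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
                panic(err)
            }
        }
    }
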
	I0130 21:16:29.655709  656435 main.go:141] libmachine: Making call to close driver server
	I0130 21:16:29.655746  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .Close
	I0130 21:16:29.655796  656435 main.go:141] libmachine: Making call to close driver server
	I0130 21:16:29.655820  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .Close
	I0130 21:16:29.656094  656435 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:16:29.656115  656435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:16:29.656125  656435 main.go:141] libmachine: Making call to close driver server
	I0130 21:16:29.656134  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .Close
	I0130 21:16:29.656234  656435 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:16:29.656252  656435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:16:29.656262  656435 main.go:141] libmachine: Making call to close driver server
	I0130 21:16:29.656272  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .Close
	I0130 21:16:29.656236  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Closing plugin on server side
	I0130 21:16:29.656510  656435 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:16:29.656531  656435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:16:29.656725  656435 kapi.go:59] client config for ingress-addon-legacy-298651: &rest.Config{Host:"https://192.168.39.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]u
int8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:16:29.657064  656435 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-298651" to be "Ready" ...
	I0130 21:16:29.658215  656435 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:16:29.658242  656435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:16:29.658298  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Closing plugin on server side
	I0130 21:16:29.685461  656435 node_ready.go:49] node "ingress-addon-legacy-298651" has status "Ready":"True"
	I0130 21:16:29.685504  656435 node_ready.go:38] duration metric: took 28.418056ms waiting for node "ingress-addon-legacy-298651" to be "Ready" ...
	I0130 21:16:29.685515  656435 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:16:29.702335  656435 main.go:141] libmachine: Making call to close driver server
	I0130 21:16:29.702366  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) Calling .Close
	I0130 21:16:29.702648  656435 main.go:141] libmachine: (ingress-addon-legacy-298651) DBG | Closing plugin on server side
	I0130 21:16:29.702659  656435 main.go:141] libmachine: Successfully made call to close driver server
	I0130 21:16:29.702677  656435 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 21:16:29.704618  656435 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0130 21:16:29.706280  656435 addons.go:505] enable addons completed in 1.045870109s: enabled=[storage-provisioner default-storageclass]
	I0130 21:16:29.711478  656435 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-nxtvt" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:31.720166  656435 pod_ready.go:102] pod "coredns-66bff467f8-nxtvt" in "kube-system" namespace has status "Ready":"False"
	I0130 21:16:34.217848  656435 pod_ready.go:102] pod "coredns-66bff467f8-nxtvt" in "kube-system" namespace has status "Ready":"False"
	I0130 21:16:36.219395  656435 pod_ready.go:102] pod "coredns-66bff467f8-nxtvt" in "kube-system" namespace has status "Ready":"False"
	I0130 21:16:37.718729  656435 pod_ready.go:92] pod "coredns-66bff467f8-nxtvt" in "kube-system" namespace has status "Ready":"True"
	I0130 21:16:37.718756  656435 pod_ready.go:81] duration metric: took 8.007256338s waiting for pod "coredns-66bff467f8-nxtvt" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.718765  656435 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.723890  656435 pod_ready.go:92] pod "etcd-ingress-addon-legacy-298651" in "kube-system" namespace has status "Ready":"True"
	I0130 21:16:37.723914  656435 pod_ready.go:81] duration metric: took 5.141591ms waiting for pod "etcd-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.723926  656435 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.729508  656435 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-298651" in "kube-system" namespace has status "Ready":"True"
	I0130 21:16:37.729533  656435 pod_ready.go:81] duration metric: took 5.598412ms waiting for pod "kube-apiserver-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.729549  656435 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.734323  656435 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-298651" in "kube-system" namespace has status "Ready":"True"
	I0130 21:16:37.734350  656435 pod_ready.go:81] duration metric: took 4.786502ms waiting for pod "kube-controller-manager-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.734366  656435 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-p74dd" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.739340  656435 pod_ready.go:92] pod "kube-proxy-p74dd" in "kube-system" namespace has status "Ready":"True"
	I0130 21:16:37.739361  656435 pod_ready.go:81] duration metric: took 4.988923ms waiting for pod "kube-proxy-p74dd" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.739370  656435 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:37.912788  656435 request.go:629] Waited for 173.320638ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-298651
	I0130 21:16:38.112526  656435 request.go:629] Waited for 196.417624ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes/ingress-addon-legacy-298651
	I0130 21:16:38.116814  656435 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-298651" in "kube-system" namespace has status "Ready":"True"
	I0130 21:16:38.116841  656435 pod_ready.go:81] duration metric: took 377.464489ms waiting for pod "kube-scheduler-ingress-addon-legacy-298651" in "kube-system" namespace to be "Ready" ...
	I0130 21:16:38.116857  656435 pod_ready.go:38] duration metric: took 8.431327878s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
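
Each of the pod_ready waits above reduces to reading the pod's Ready condition and retrying until it reports True. A minimal client-go sketch of the readiness check itself; the kubeconfig path, namespace and pod name are copied from this run but should be read as placeholders:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's Ready condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-66bff467f8-nxtvt", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s Ready=%v\n", pod.Name, podReady(pod))
    }
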
	I0130 21:16:38.116881  656435 api_server.go:52] waiting for apiserver process to appear ...
	I0130 21:16:38.116960  656435 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:16:38.135118  656435 api_server.go:72] duration metric: took 8.747499873s to wait for apiserver process to appear ...
	I0130 21:16:38.135157  656435 api_server.go:88] waiting for apiserver healthz status ...
	I0130 21:16:38.135186  656435 api_server.go:253] Checking apiserver healthz at https://192.168.39.33:8443/healthz ...
	I0130 21:16:38.140779  656435 api_server.go:279] https://192.168.39.33:8443/healthz returned 200:
	ok
	I0130 21:16:38.142091  656435 api_server.go:141] control plane version: v1.18.20
	I0130 21:16:38.142118  656435 api_server.go:131] duration metric: took 6.952893ms to wait for apiserver health ...
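
The healthz probe above is a plain HTTPS GET against https://192.168.39.33:8443/healthz that trusts the cluster CA and expects a 200 with body "ok". A small Go sketch of the same check; the CA path is the CAFile shown in the client config earlier in this log and is specific to this CI host:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
        "time"
    )

    func main() {
        caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt")
        if err != nil {
            panic(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{RootCAs: pool},
            },
        }
        resp, err := client.Get("https://192.168.39.33:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body)) // a healthy apiserver returns 200 "ok"
    }
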
	I0130 21:16:38.142129  656435 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:16:38.312618  656435 request.go:629] Waited for 170.390476ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0130 21:16:38.319937  656435 system_pods.go:59] 7 kube-system pods found
	I0130 21:16:38.319979  656435 system_pods.go:61] "coredns-66bff467f8-nxtvt" [8cd673b4-e41d-4227-9151-bb28e25f3f66] Running
	I0130 21:16:38.319984  656435 system_pods.go:61] "etcd-ingress-addon-legacy-298651" [5a1d77ef-9736-4516-bc28-044fd0e0305e] Running
	I0130 21:16:38.319988  656435 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-298651" [5fc6450b-8ac7-49c5-8c52-587c70edf3d7] Running
	I0130 21:16:38.319992  656435 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-298651" [b161b9ff-875a-4c66-a1da-82b17a5a06e2] Running
	I0130 21:16:38.319996  656435 system_pods.go:61] "kube-proxy-p74dd" [15ed81f0-56dc-43cc-aa35-e11b2d5a89b5] Running
	I0130 21:16:38.320000  656435 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-298651" [5264530c-243c-4495-a3cb-14e742589513] Running
	I0130 21:16:38.320003  656435 system_pods.go:61] "storage-provisioner" [cdda4edd-e9a7-484c-b08d-c4c14224049c] Running
	I0130 21:16:38.320010  656435 system_pods.go:74] duration metric: took 177.874321ms to wait for pod list to return data ...
	I0130 21:16:38.320018  656435 default_sa.go:34] waiting for default service account to be created ...
	I0130 21:16:38.512689  656435 request.go:629] Waited for 192.553456ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/default/serviceaccounts
	I0130 21:16:38.515890  656435 default_sa.go:45] found service account: "default"
	I0130 21:16:38.515921  656435 default_sa.go:55] duration metric: took 195.893837ms for default service account to be created ...
	I0130 21:16:38.515931  656435 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 21:16:38.712901  656435 request.go:629] Waited for 196.857565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/namespaces/kube-system/pods
	I0130 21:16:38.718580  656435 system_pods.go:86] 7 kube-system pods found
	I0130 21:16:38.718610  656435 system_pods.go:89] "coredns-66bff467f8-nxtvt" [8cd673b4-e41d-4227-9151-bb28e25f3f66] Running
	I0130 21:16:38.718616  656435 system_pods.go:89] "etcd-ingress-addon-legacy-298651" [5a1d77ef-9736-4516-bc28-044fd0e0305e] Running
	I0130 21:16:38.718621  656435 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-298651" [5fc6450b-8ac7-49c5-8c52-587c70edf3d7] Running
	I0130 21:16:38.718625  656435 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-298651" [b161b9ff-875a-4c66-a1da-82b17a5a06e2] Running
	I0130 21:16:38.718629  656435 system_pods.go:89] "kube-proxy-p74dd" [15ed81f0-56dc-43cc-aa35-e11b2d5a89b5] Running
	I0130 21:16:38.718633  656435 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-298651" [5264530c-243c-4495-a3cb-14e742589513] Running
	I0130 21:16:38.718637  656435 system_pods.go:89] "storage-provisioner" [cdda4edd-e9a7-484c-b08d-c4c14224049c] Running
	I0130 21:16:38.718644  656435 system_pods.go:126] duration metric: took 202.707404ms to wait for k8s-apps to be running ...
	I0130 21:16:38.718652  656435 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 21:16:38.718700  656435 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:16:38.735392  656435 system_svc.go:56] duration metric: took 16.725177ms WaitForService to wait for kubelet.
	I0130 21:16:38.735425  656435 kubeadm.go:581] duration metric: took 9.347812523s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 21:16:38.735451  656435 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:16:38.912945  656435 request.go:629] Waited for 177.373132ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.33:8443/api/v1/nodes
	I0130 21:16:38.917038  656435 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:16:38.917077  656435 node_conditions.go:123] node cpu capacity is 2
	I0130 21:16:38.917092  656435 node_conditions.go:105] duration metric: took 181.633197ms to run NodePressure ...
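
The repeated "Waited for ... due to client-side throttling" messages in this section come from client-go's default client-side rate limiter (historically QPS 5, burst 10), not from server-side API Priority and Fairness. A hedged sketch of raising those limits on a rest.Config, which is the usual way to avoid that client-side wait; the kubeconfig path is a placeholder:

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        // Raise the client-side limits; the defaults are what produce the
        // "client-side throttling" log lines when several requests are issued quickly.
        cfg.QPS = 50
        cfg.Burst = 100

        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("nodes:", len(nodes.Items))
    }
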
	I0130 21:16:38.917107  656435 start.go:228] waiting for startup goroutines ...
	I0130 21:16:38.917119  656435 start.go:233] waiting for cluster config update ...
	I0130 21:16:38.917142  656435 start.go:242] writing updated cluster config ...
	I0130 21:16:38.917434  656435 ssh_runner.go:195] Run: rm -f paused
	I0130 21:16:38.969257  656435 start.go:600] kubectl: 1.29.1, cluster: 1.18.20 (minor skew: 11)
	I0130 21:16:38.971289  656435 out.go:177] 
	W0130 21:16:38.972852  656435 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0130 21:16:38.974408  656435 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0130 21:16:38.975945  656435 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-298651" cluster and "default" namespace by default
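
The "minor skew: 11" figure above is simply the difference between the kubectl client's minor version (1.29) and the cluster's (1.18); kubectl's documented support window is one minor version in either direction, hence the warning and the suggestion to use the bundled "minikube kubectl". A tiny sketch of that computation:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns the absolute difference between the minor versions of two
    // "major.minor.patch" strings, e.g. ("1.29.1", "1.18.20") -> 11.
    func minorSkew(client, server string) int {
        minor := func(v string) int {
            m, _ := strconv.Atoi(strings.Split(v, ".")[1])
            return m
        }
        skew := minor(client) - minor(server)
        if skew < 0 {
            skew = -skew
        }
        return skew
    }

    func main() {
        fmt.Println(minorSkew("1.29.1", "1.18.20")) // prints 11
    }
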
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 21:15:40 UTC, ends at Tue 2024-01-30 21:19:52 UTC. --
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.613206655Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706649592613192317,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203420,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=b962baf4-672c-4a53-a284-294ed629277e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.613742689Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6d478634-64d7-4006-80a3-47edd1257a16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.613791459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6d478634-64d7-4006-80a3-47edd1257a16 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.614481086Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6962691ef80a811e42f719fbaeff63480ad0c892ac0868645ff1b74fa7926e52,PodSandboxId:633bfcc75e94634bd861b0681dfb93948faa8ffd53b677b8a14e7a6a057cf0bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706649577778325056,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-5pqvg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c8c9a9e-decb-44ed-ac74-c3b2506cef84,},Annotations:map[string]string{io.kubernetes.container.hash: d6667bc9,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d778464fd3a0f3aa3bde9ba8adf9fb2a7ecfa254cd26f1b226e34fa457a3ab25,PodSandboxId:0be0c3bf74c7ac89d1d475816747ef37f8ea12ac614d21e4e4b6117f0416c818,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706649435321193647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb2f507d-94fb-4d2e-85be-c538720fdd66,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 37c39185,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbec1e2539fdc29453bdbb76265250d38041d81ea47ae2f8a1344da597a617ac,PodSandboxId:6295c2032b9938b3dbaf7dda2f8d34b79cbe8375b5c45d73e4372ca9a638ff0f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706649411214416926,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-smdmt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9de330bd,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:37be44e2b3752a4764d4c0dfedcad746c4b2cbca861fadc1fcc2a60d9cb018b9,PodSandboxId:1e22e953da5ce57b336110138bb05e68b9cc604d9e2353caaf6790692cf59674,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706649403492165668,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xj7l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2f43c44-6ec6-4c71-be47-7304d46c2172,},Annotations:map[string]string{io.kubernetes.container.hash: a355aaf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73cda82932fdc222093808154c418a863f91dc3d35d5c88eda5158d0229eaa6f,PodSandboxId:88ea8560252727bb9f285b2aa057c2afafa56a875efc44be38b0767641feac2b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706649402996833302,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zrjk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c11eab8-bce5-49ca-b1f1-b38e594284f3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca36757,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fe83a59de03354e84f7a4003c0d21635e0735704c44fdd2e41664767d0f96e,PodSandboxId:9b110589501fe3ab26f0b4a9a1831e946856b66a6ab7232fd34f39131a597adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706649390907671497,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nxtvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd673b4-e41d-4227-9151-bb28e25f3f66,},Annotations:map[string]string{io.kubernetes.container.hash: f1c92ad5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87642d0389fd0ab71b23750f63d
a72f63f7f2bba496d2256b0bd039b2237300a,PodSandboxId:18e9d9a61c5c1ac2e4bd3d778094bde44e1ef689e0aeccff7790862b12e4df99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706649390480063631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdda4edd-e9a7-484c-b08d-c4c14224049c,},Annotations:map[string]string{io.kubernetes.container.hash: 368891ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41945e1b59128d56b3ff5f5d8b08
b234c43c561909d7f246c7620d201276d6db,PodSandboxId:5a9dd71e8996df101fb985c25cd1fac6b31a4daa98e7932041b45c5b8c39a9c4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706649389926495628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p74dd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ed81f0-56dc-43cc-aa35-e11b2d5a89b5,},Annotations:map[string]string{io.kubernetes.container.hash: f53d0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625c57bd3e517d5dd311ce7601448119ff3f45f89b1d77bc56d50af648cf0c3e,Pod
SandboxId:0c772052af7497c15a012ae4feeb2c8bed79d3e99f58bbdfe200745087d49b2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706649365852357470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1fffc4cd7915c7b22a55caa75d8e58a,},Annotations:map[string]string{io.kubernetes.container.hash: d91c1164,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89fca98ffe8bb810d1c857a3315258d108903b250fa2f0cb804ca50ba06c9665,PodSandboxId:d62291aef863cbb416fc462b77c6682a3a32
93f6e201455518a35070ca883a18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706649364495182498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ea8e41b5494f2a850efe0d6943a062994751196f99d3a1b57ab23c8e42cff4,PodSandboxId:87fc6b
345231b6ec801726ebbb0bf5a58140a7b9af3e866da795519494268ae2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706649364317014817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720ddd6eb525e7f5dfba34a4a0e8c72b5197e2dd2412c82f29dc765b3b549ea1,PodSandboxId:e80ea1b2ff7a
a8168f257633049ef4bebf828f38f6cc672dbea9430e25cb2aec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706649364157357949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c075e7c2a77a0ecddd8bfa2cb1e49b77,},Annotations:map[string]string{io.kubernetes.container.hash: d140de3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6d478634-64d7-4006-80a3-47edd1257a16 name=/runtime.v1.RuntimeSer
vice/ListContainers
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.654362706Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=c96a9c07-fac8-4a24-a9c8-93cb935a1eb3 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.654469804Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=c96a9c07-fac8-4a24-a9c8-93cb935a1eb3 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.656376715Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=7cb1fa52-b700-4dbd-b63a-72697c0b8454 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.656881547Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706649592656868058,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:203420,},InodesUsed:&UInt64Value{Value:85,},},},}" file="go-grpc-middleware/chain.go:25" id=7cb1fa52-b700-4dbd-b63a-72697c0b8454 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.657606657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=019b0dc5-eed2-4c0f-b8dd-66bc811cc9db name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.657685923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=019b0dc5-eed2-4c0f-b8dd-66bc811cc9db name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:19:52 ingress-addon-legacy-298651 crio[721]: time="2024-01-30 21:19:52.657994430Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6962691ef80a811e42f719fbaeff63480ad0c892ac0868645ff1b74fa7926e52,PodSandboxId:633bfcc75e94634bd861b0681dfb93948faa8ffd53b677b8a14e7a6a057cf0bc,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,State:CONTAINER_RUNNING,CreatedAt:1706649577778325056,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-5pqvg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 8c8c9a9e-decb-44ed-ac74-c3b2506cef84,},Annotations:map[string]string{io.kubernetes.container.hash: d6667bc9,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d778464fd3a0f3aa3bde9ba8adf9fb2a7ecfa254cd26f1b226e34fa457a3ab25,PodSandboxId:0be0c3bf74c7ac89d1d475816747ef37f8ea12ac614d21e4e4b6117f0416c818,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25,State:CONTAINER_RUNNING,CreatedAt:1706649435321193647,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: fb2f507d-94fb-4d2e-85be-c538720fdd66,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 37c39185,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cbec1e2539fdc29453bdbb76265250d38041d81ea47ae2f8a1344da597a617ac,PodSandboxId:6295c2032b9938b3dbaf7dda2f8d34b79cbe8375b5c45d73e4372ca9a638ff0f,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1706649411214416926,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-smdmt,io
.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea,},Annotations:map[string]string{io.kubernetes.container.hash: 9de330bd,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:37be44e2b3752a4764d4c0dfedcad746c4b2cbca861fadc1fcc2a60d9cb018b9,PodSandboxId:1e22e953da5ce57b336110138bb05e68b9cc604d9e2353caaf6790692cf59674,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34e
a58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706649403492165668,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-xj7l8,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: c2f43c44-6ec6-4c71-be47-7304d46c2172,},Annotations:map[string]string{io.kubernetes.container.hash: a355aaf7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73cda82932fdc222093808154c418a863f91dc3d35d5c88eda5158d0229eaa6f,PodSandboxId:88ea8560252727bb9f285b2aa057c2afafa56a875efc44be38b0767641feac2b,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook
-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1706649402996833302,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-zrjk6,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 0c11eab8-bce5-49ca-b1f1-b38e594284f3,},Annotations:map[string]string{io.kubernetes.container.hash: 9ca36757,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:12fe83a59de03354e84f7a4003c0d21635e0735704c44fdd2e41664767d0f96e,PodSandboxId:9b110589501fe3ab26f0b4a9a1831e946856b66a6ab7232fd34f39131a597adc,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec
{Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1706649390907671497,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-nxtvt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8cd673b4-e41d-4227-9151-bb28e25f3f66,},Annotations:map[string]string{io.kubernetes.container.hash: f1c92ad5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:87642d0389fd0ab71b23750f63d
a72f63f7f2bba496d2256b0bd039b2237300a,PodSandboxId:18e9d9a61c5c1ac2e4bd3d778094bde44e1ef689e0aeccff7790862b12e4df99,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706649390480063631,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cdda4edd-e9a7-484c-b08d-c4c14224049c,},Annotations:map[string]string{io.kubernetes.container.hash: 368891ac,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:41945e1b59128d56b3ff5f5d8b08
b234c43c561909d7f246c7620d201276d6db,PodSandboxId:5a9dd71e8996df101fb985c25cd1fac6b31a4daa98e7932041b45c5b8c39a9c4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1706649389926495628,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-p74dd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 15ed81f0-56dc-43cc-aa35-e11b2d5a89b5,},Annotations:map[string]string{io.kubernetes.container.hash: f53d0d36,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:625c57bd3e517d5dd311ce7601448119ff3f45f89b1d77bc56d50af648cf0c3e,Pod
SandboxId:0c772052af7497c15a012ae4feeb2c8bed79d3e99f58bbdfe200745087d49b2c,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1706649365852357470,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d1fffc4cd7915c7b22a55caa75d8e58a,},Annotations:map[string]string{io.kubernetes.container.hash: d91c1164,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:89fca98ffe8bb810d1c857a3315258d108903b250fa2f0cb804ca50ba06c9665,PodSandboxId:d62291aef863cbb416fc462b77c6682a3a32
93f6e201455518a35070ca883a18,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1706649364495182498,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f7ea8e41b5494f2a850efe0d6943a062994751196f99d3a1b57ab23c8e42cff4,PodSandboxId:87fc6b
345231b6ec801726ebbb0bf5a58140a7b9af3e866da795519494268ae2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1706649364317014817,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:720ddd6eb525e7f5dfba34a4a0e8c72b5197e2dd2412c82f29dc765b3b549ea1,PodSandboxId:e80ea1b2ff7a
a8168f257633049ef4bebf828f38f6cc672dbea9430e25cb2aec,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1706649364157357949,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-298651,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c075e7c2a77a0ecddd8bfa2cb1e49b77,},Annotations:map[string]string{io.kubernetes.container.hash: d140de3d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=019b0dc5-eed2-4c0f-b8dd-66bc811cc9db name=/runtime.v1.RuntimeSer
vice/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6962691ef80a8       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7            15 seconds ago      Running             hello-world-app           0                   633bfcc75e946       hello-world-app-5f5d8b66bb-5pqvg
	d778464fd3a0f       docker.io/library/nginx@sha256:5b7ff23e6861b908f034b82d2cf77a295488e0d13271e5438ac211fcf9ed9b25                    2 minutes ago       Running             nginx                     0                   0be0c3bf74c7a       nginx
	cbec1e2539fdc       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   3 minutes ago       Exited              controller                0                   6295c2032b993       ingress-nginx-controller-7fcf777cb7-smdmt
	37be44e2b3752       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              patch                     0                   1e22e953da5ce       ingress-nginx-admission-patch-xj7l8
	73cda82932fdc       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     3 minutes ago       Exited              create                    0                   88ea856025272       ingress-nginx-admission-create-zrjk6
	12fe83a59de03       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   9b110589501fe       coredns-66bff467f8-nxtvt
	87642d0389fd0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Running             storage-provisioner       0                   18e9d9a61c5c1       storage-provisioner
	41945e1b59128       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   5a9dd71e8996d       kube-proxy-p74dd
	625c57bd3e517       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   0c772052af749       etcd-ingress-addon-legacy-298651
	89fca98ffe8bb       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   d62291aef863c       kube-controller-manager-ingress-addon-legacy-298651
	f7ea8e41b5494       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   87fc6b345231b       kube-scheduler-ingress-addon-legacy-298651
	720ddd6eb525e       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   e80ea1b2ff7aa       kube-apiserver-ingress-addon-legacy-298651
	
	
	==> coredns [12fe83a59de03354e84f7a4003c0d21635e0735704c44fdd2e41664767d0f96e] <==
	[INFO] 10.244.0.5:60962 - 59829 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000139592s
	[INFO] 10.244.0.5:53860 - 3584 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000440519s
	[INFO] 10.244.0.5:60962 - 41834 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000346625s
	[INFO] 10.244.0.5:53860 - 51981 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000163223s
	[INFO] 10.244.0.5:60962 - 13176 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083879s
	[INFO] 10.244.0.5:53860 - 49206 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077385s
	[INFO] 10.244.0.5:60962 - 27181 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000149661s
	[INFO] 10.244.0.5:53860 - 43115 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000202224s
	[INFO] 10.244.0.5:53860 - 35688 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069031s
	[INFO] 10.244.0.5:53860 - 41968 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000069809s
	[INFO] 10.244.0.5:53860 - 47769 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000075208s
	[INFO] 10.244.0.5:37886 - 34814 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000076174s
	[INFO] 10.244.0.5:52614 - 64774 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000035128s
	[INFO] 10.244.0.5:52614 - 17338 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000024598s
	[INFO] 10.244.0.5:52614 - 25598 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000021184s
	[INFO] 10.244.0.5:52614 - 7903 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000019554s
	[INFO] 10.244.0.5:52614 - 30387 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000020558s
	[INFO] 10.244.0.5:52614 - 6029 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000019148s
	[INFO] 10.244.0.5:52614 - 52222 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000033749s
	[INFO] 10.244.0.5:37886 - 61441 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000058987s
	[INFO] 10.244.0.5:37886 - 39261 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000030723s
	[INFO] 10.244.0.5:37886 - 40634 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00003399s
	[INFO] 10.244.0.5:37886 - 49646 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00004666s
	[INFO] 10.244.0.5:37886 - 45646 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000033311s
	[INFO] 10.244.0.5:37886 - 52016 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00002784s
	
	
	==> describe nodes <==
	Name:               ingress-addon-legacy-298651
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-298651
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=ingress-addon-legacy-298651
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T21_16_13_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 21:16:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-298651
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 21:19:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 21:19:44 +0000   Tue, 30 Jan 2024 21:16:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 21:19:44 +0000   Tue, 30 Jan 2024 21:16:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 21:19:44 +0000   Tue, 30 Jan 2024 21:16:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 21:19:44 +0000   Tue, 30 Jan 2024 21:16:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.33
	  Hostname:    ingress-addon-legacy-298651
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012808Ki
	  pods:               110
	System Info:
	  Machine ID:                 3f36d5b6c98f40d596297a7bc04bcbfd
	  System UUID:                3f36d5b6-c98f-40d5-9629-7a7bc04bcbfd
	  Boot ID:                    9a687197-a009-4078-9f3e-3732d1155bc9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-5pqvg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 coredns-66bff467f8-nxtvt                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m23s
	  kube-system                 etcd-ingress-addon-legacy-298651                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-apiserver-ingress-addon-legacy-298651             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-298651    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 kube-proxy-p74dd                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m24s
	  kube-system                 kube-scheduler-ingress-addon-legacy-298651             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m50s (x5 over 3m50s)  kubelet     Node ingress-addon-legacy-298651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x4 over 3m50s)  kubelet     Node ingress-addon-legacy-298651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x4 over 3m50s)  kubelet     Node ingress-addon-legacy-298651 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m38s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m38s                  kubelet     Node ingress-addon-legacy-298651 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m38s                  kubelet     Node ingress-addon-legacy-298651 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m38s                  kubelet     Node ingress-addon-legacy-298651 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m38s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m28s                  kubelet     Node ingress-addon-legacy-298651 status is now: NodeReady
	  Normal  Starting                 3m22s                  kube-proxy  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan30 21:15] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093510] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.444280] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.519085] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.153766] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.041234] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.232925] systemd-fstab-generator[646]: Ignoring "noauto" for root device
	[  +0.113748] systemd-fstab-generator[657]: Ignoring "noauto" for root device
	[  +0.143426] systemd-fstab-generator[670]: Ignoring "noauto" for root device
	[  +0.108965] systemd-fstab-generator[681]: Ignoring "noauto" for root device
	[  +0.206037] systemd-fstab-generator[705]: Ignoring "noauto" for root device
	[  +7.970905] systemd-fstab-generator[1033]: Ignoring "noauto" for root device
	[Jan30 21:16] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +10.711374] systemd-fstab-generator[1426]: Ignoring "noauto" for root device
	[ +16.698288] kauditd_printk_skb: 6 callbacks suppressed
	[  +7.199388] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.004456] kauditd_printk_skb: 6 callbacks suppressed
	[Jan30 21:17] kauditd_printk_skb: 7 callbacks suppressed
	[Jan30 21:19] kauditd_printk_skb: 5 callbacks suppressed
	[  +9.985307] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [625c57bd3e517d5dd311ce7601448119ff3f45f89b1d77bc56d50af648cf0c3e] <==
	2024-01-30 21:16:05.986225 W | auth: simple token is not cryptographically signed
	2024-01-30 21:16:05.989933 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2024-01-30 21:16:05.992006 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-30 21:16:05.992229 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-30 21:16:05.992559 I | embed: listening for peers on 192.168.39.33:2380
	2024-01-30 21:16:05.992670 I | etcdserver: 578695e7c923614c as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2024/01/30 21:16:05 INFO: 578695e7c923614c switched to configuration voters=(6306893150923481420)
	2024-01-30 21:16:05.993002 I | etcdserver/membership: added member 578695e7c923614c [https://192.168.39.33:2380] to cluster ef95fe71d176e4d2
	raft2024/01/30 21:16:06 INFO: 578695e7c923614c is starting a new election at term 1
	raft2024/01/30 21:16:06 INFO: 578695e7c923614c became candidate at term 2
	raft2024/01/30 21:16:06 INFO: 578695e7c923614c received MsgVoteResp from 578695e7c923614c at term 2
	raft2024/01/30 21:16:06 INFO: 578695e7c923614c became leader at term 2
	raft2024/01/30 21:16:06 INFO: raft.node: 578695e7c923614c elected leader 578695e7c923614c at term 2
	2024-01-30 21:16:06.679421 I | etcdserver: setting up the initial cluster version to 3.4
	2024-01-30 21:16:06.681442 N | etcdserver/membership: set the initial cluster version to 3.4
	2024-01-30 21:16:06.681567 I | etcdserver/api: enabled capabilities for version 3.4
	2024-01-30 21:16:06.681612 I | etcdserver: published {Name:ingress-addon-legacy-298651 ClientURLs:[https://192.168.39.33:2379]} to cluster ef95fe71d176e4d2
	2024-01-30 21:16:06.681631 I | embed: ready to serve client requests
	2024-01-30 21:16:06.681813 I | embed: ready to serve client requests
	2024-01-30 21:16:06.683048 I | embed: serving client requests on 192.168.39.33:2379
	2024-01-30 21:16:06.683147 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-30 21:16:28.208714 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/pod-garbage-collector\" " with result "range_response_count:1 size:210" took too long (485.527553ms) to execute
	2024-01-30 21:16:29.287666 W | etcdserver: request "header:<ID:7011134147170000212 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-proxy-p74dd.17af3c93bc2daf6d\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-proxy-p74dd.17af3c93bc2daf6d\" value_size:675 lease:7011134147169999854 >> failure:<>>" with result "size:16" took too long (142.089806ms) to execute
	2024-01-30 21:16:29.355097 W | etcdserver: read-only range request "key:\"/registry/configmaps/kube-system/coredns\" " with result "range_response_count:1 size:577" took too long (240.154872ms) to execute
	2024-01-30 21:16:29.371218 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-p74dd\" " with result "range_response_count:1 size:3588" took too long (256.203412ms) to execute
	
	
	==> kernel <==
	 21:19:53 up 4 min,  0 users,  load average: 0.22, 0.24, 0.11
	Linux ingress-addon-legacy-298651 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [720ddd6eb525e7f5dfba34a4a0e8c72b5197e2dd2412c82f29dc765b3b549ea1] <==
	I0130 21:16:10.632789       1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
	E0130 21:16:10.714504       1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.39.33, ResourceVersion: 0, AdditionalErrorMsg: 
	I0130 21:16:10.717980       1 cache.go:39] Caches are synced for autoregister controller
	I0130 21:16:10.724931       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0130 21:16:10.725015       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0130 21:16:10.725039       1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller 
	I0130 21:16:10.742327       1 shared_informer.go:230] Caches are synced for crd-autoregister 
	I0130 21:16:11.616856       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0130 21:16:11.616900       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0130 21:16:11.637748       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0130 21:16:11.643530       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0130 21:16:11.643615       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0130 21:16:12.135388       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0130 21:16:12.175066       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0130 21:16:12.346130       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.33]
	I0130 21:16:12.347123       1 controller.go:609] quota admission added evaluator for: endpoints
	I0130 21:16:12.350845       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0130 21:16:12.967425       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0130 21:16:13.679071       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0130 21:16:13.818680       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0130 21:16:14.139379       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0130 21:16:28.949754       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0130 21:16:29.070008       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0130 21:16:39.805513       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0130 21:17:11.082414       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	
	
	==> kube-controller-manager [89fca98ffe8bb810d1c857a3315258d108903b250fa2f0cb804ca50ba06c9665] <==
	I0130 21:16:28.960989       1 shared_informer.go:230] Caches are synced for ReplicaSet 
	I0130 21:16:28.973495       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0130 21:16:28.973653       1 shared_informer.go:230] Caches are synced for job 
	I0130 21:16:28.973717       1 shared_informer.go:230] Caches are synced for endpoint_slice 
	I0130 21:16:28.991810       1 shared_informer.go:230] Caches are synced for ReplicationController 
	I0130 21:16:28.991916       1 shared_informer.go:230] Caches are synced for HPA 
	I0130 21:16:28.992863       1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"e029c6c8-6e8a-4726-b912-67384827d6a7", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-p74dd
	I0130 21:16:29.001601       1 shared_informer.go:230] Caches are synced for PVC protection 
	I0130 21:16:29.025860       1 shared_informer.go:230] Caches are synced for deployment 
	I0130 21:16:29.026650       1 shared_informer.go:230] Caches are synced for endpoint 
	I0130 21:16:29.026720       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0130 21:16:29.026734       1 shared_informer.go:230] Caches are synced for disruption 
	I0130 21:16:29.034970       1 disruption.go:339] Sending events to api server.
	I0130 21:16:29.026756       1 shared_informer.go:230] Caches are synced for resource quota 
	I0130 21:16:29.341478       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"3be5297b-861c-4a1b-965b-0c49195958d1", APIVersion:"apps/v1", ResourceVersion:"322", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 1
	I0130 21:16:29.395319       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"6e96130e-a3f4-443c-bf70-c0a51fbcc68b", APIVersion:"apps/v1", ResourceVersion:"335", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-nxtvt
	I0130 21:16:39.782585       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"18b90b68-4628-4a27-b667-21d0c153d18e", APIVersion:"apps/v1", ResourceVersion:"424", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0130 21:16:39.809128       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"767aba69-832b-4220-9a17-8c9d6d0025c5", APIVersion:"apps/v1", ResourceVersion:"425", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-smdmt
	I0130 21:16:39.840882       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b54288ae-47ed-484e-a51c-e0811dfd9b06", APIVersion:"batch/v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-zrjk6
	I0130 21:16:39.916585       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"bb93e464-18ab-4319-930c-b384306964c3", APIVersion:"batch/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-xj7l8
	I0130 21:16:43.396344       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b54288ae-47ed-484e-a51c-e0811dfd9b06", APIVersion:"batch/v1", ResourceVersion:"438", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0130 21:16:44.402529       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"bb93e464-18ab-4319-930c-b384306964c3", APIVersion:"batch/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0130 21:19:34.711823       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"c99783cb-c02e-404f-a284-dcae363cca67", APIVersion:"apps/v1", ResourceVersion:"651", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0130 21:19:34.727220       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"704d2ef5-0b88-44d1-865f-55ea0433d957", APIVersion:"apps/v1", ResourceVersion:"652", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-5pqvg
	E0130 21:19:50.013627       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-nq95z" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	
	==> kube-proxy [41945e1b59128d56b3ff5f5d8b08b234c43c561909d7f246c7620d201276d6db] <==
	W0130 21:16:30.183801       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0130 21:16:30.194764       1 node.go:136] Successfully retrieved node IP: 192.168.39.33
	I0130 21:16:30.194811       1 server_others.go:186] Using iptables Proxier.
	I0130 21:16:30.195036       1 server.go:583] Version: v1.18.20
	I0130 21:16:30.197229       1 config.go:133] Starting endpoints config controller
	I0130 21:16:30.197347       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0130 21:16:30.197562       1 config.go:315] Starting service config controller
	I0130 21:16:30.202809       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0130 21:16:30.298016       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0130 21:16:30.303038       1 shared_informer.go:230] Caches are synced for service config 
	
	
	==> kube-scheduler [f7ea8e41b5494f2a850efe0d6943a062994751196f99d3a1b57ab23c8e42cff4] <==
	I0130 21:16:10.739939       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0130 21:16:10.740058       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0130 21:16:10.742618       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0130 21:16:10.742930       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 21:16:10.742962       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 21:16:10.743027       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0130 21:16:10.751713       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 21:16:10.751845       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 21:16:10.751987       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 21:16:10.752056       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 21:16:10.752073       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 21:16:10.752228       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 21:16:10.752969       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 21:16:10.753957       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 21:16:10.759041       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 21:16:10.759143       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 21:16:10.759203       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 21:16:10.759313       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 21:16:11.569403       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 21:16:11.569410       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 21:16:11.607131       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 21:16:11.628805       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 21:16:11.780562       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 21:16:11.966639       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0130 21:16:14.143637       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 21:15:40 UTC, ends at Tue 2024-01-30 21:19:53 UTC. --
	Jan 30 21:16:45 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:16:45.503389    1433 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2f43c44-6ec6-4c71-be47-7304d46c2172-ingress-nginx-admission-token-bxpg8" (OuterVolumeSpecName: "ingress-nginx-admission-token-bxpg8") pod "c2f43c44-6ec6-4c71-be47-7304d46c2172" (UID: "c2f43c44-6ec6-4c71-be47-7304d46c2172"). InnerVolumeSpecName "ingress-nginx-admission-token-bxpg8". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 21:16:45 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:16:45.598347    1433 reconciler.go:319] Volume detached for volume "ingress-nginx-admission-token-bxpg8" (UniqueName: "kubernetes.io/secret/c2f43c44-6ec6-4c71-be47-7304d46c2172-ingress-nginx-admission-token-bxpg8") on node "ingress-addon-legacy-298651" DevicePath ""
	Jan 30 21:16:45 ingress-addon-legacy-298651 kubelet[1433]: W0130 21:16:45.600117    1433 pod_container_deletor.go:77] Container "1e22e953da5ce57b336110138bb05e68b9cc604d9e2353caaf6790692cf59674" not found in pod's containers
	Jan 30 21:16:52 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:16:52.579104    1433 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 30 21:16:52 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:16:52.721019    1433 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "minikube-ingress-dns-token-z7sgp" (UniqueName: "kubernetes.io/secret/a56fedce-1ad9-4a5a-8e53-9823a540f003-minikube-ingress-dns-token-z7sgp") pod "kube-ingress-dns-minikube" (UID: "a56fedce-1ad9-4a5a-8e53-9823a540f003")
	Jan 30 21:17:11 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:17:11.269320    1433 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 30 21:17:11 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:17:11.391100    1433 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zdl6f" (UniqueName: "kubernetes.io/secret/fb2f507d-94fb-4d2e-85be-c538720fdd66-default-token-zdl6f") pod "nginx" (UID: "fb2f507d-94fb-4d2e-85be-c538720fdd66")
	Jan 30 21:19:34 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:34.771234    1433 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Jan 30 21:19:34 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:34.887542    1433 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-zdl6f" (UniqueName: "kubernetes.io/secret/8c8c9a9e-decb-44ed-ac74-c3b2506cef84-default-token-zdl6f") pod "hello-world-app-5f5d8b66bb-5pqvg" (UID: "8c8c9a9e-decb-44ed-ac74-c3b2506cef84")
	Jan 30 21:19:36 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:36.356090    1433 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c110bdfa1e4f7a45f903ec5f628084409ed59ef79224f4243851aec89b9cbafb
	Jan 30 21:19:36 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:36.703885    1433 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: c110bdfa1e4f7a45f903ec5f628084409ed59ef79224f4243851aec89b9cbafb
	Jan 30 21:19:36 ingress-addon-legacy-298651 kubelet[1433]: E0130 21:19:36.704529    1433 remote_runtime.go:295] ContainerStatus "c110bdfa1e4f7a45f903ec5f628084409ed59ef79224f4243851aec89b9cbafb" from runtime service failed: rpc error: code = NotFound desc = could not find container "c110bdfa1e4f7a45f903ec5f628084409ed59ef79224f4243851aec89b9cbafb": container with ID starting with c110bdfa1e4f7a45f903ec5f628084409ed59ef79224f4243851aec89b9cbafb not found: ID does not exist
	Jan 30 21:19:37 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:37.501052    1433 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-z7sgp" (UniqueName: "kubernetes.io/secret/a56fedce-1ad9-4a5a-8e53-9823a540f003-minikube-ingress-dns-token-z7sgp") pod "a56fedce-1ad9-4a5a-8e53-9823a540f003" (UID: "a56fedce-1ad9-4a5a-8e53-9823a540f003")
	Jan 30 21:19:37 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:37.516173    1433 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a56fedce-1ad9-4a5a-8e53-9823a540f003-minikube-ingress-dns-token-z7sgp" (OuterVolumeSpecName: "minikube-ingress-dns-token-z7sgp") pod "a56fedce-1ad9-4a5a-8e53-9823a540f003" (UID: "a56fedce-1ad9-4a5a-8e53-9823a540f003"). InnerVolumeSpecName "minikube-ingress-dns-token-z7sgp". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 21:19:37 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:37.601518    1433 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-z7sgp" (UniqueName: "kubernetes.io/secret/a56fedce-1ad9-4a5a-8e53-9823a540f003-minikube-ingress-dns-token-z7sgp") on node "ingress-addon-legacy-298651" DevicePath ""
	Jan 30 21:19:45 ingress-addon-legacy-298651 kubelet[1433]: E0130 21:19:45.210027    1433 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-smdmt.17af3cc16a53d176", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-smdmt", UID:"584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea", APIVersion:"v1", ResourceVersion:"431", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-298651"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16677dc4c55a776, ext:211561317217, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16677dc4c55a776, ext:211561317217, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-smdmt.17af3cc16a53d176" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 30 21:19:45 ingress-addon-legacy-298651 kubelet[1433]: E0130 21:19:45.228165    1433 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-smdmt.17af3cc16a53d176", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-smdmt", UID:"584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea", APIVersion:"v1", ResourceVersion:"431", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stopping container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-298651"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc16677dc4c55a776, ext:211561317217, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc16677dc4d0332da, ext:211572690632, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-smdmt.17af3cc16a53d176" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Jan 30 21:19:47 ingress-addon-legacy-298651 kubelet[1433]: W0130 21:19:47.501957    1433 pod_container_deletor.go:77] Container "6295c2032b9938b3dbaf7dda2f8d34b79cbe8375b5c45d73e4372ca9a638ff0f" not found in pod's containers
	Jan 30 21:19:49 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:49.341368    1433 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-d9wlx" (UniqueName: "kubernetes.io/secret/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea-ingress-nginx-token-d9wlx") pod "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea" (UID: "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea")
	Jan 30 21:19:49 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:49.341405    1433 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea-webhook-cert") pod "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea" (UID: "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea")
	Jan 30 21:19:49 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:49.344417    1433 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea-ingress-nginx-token-d9wlx" (OuterVolumeSpecName: "ingress-nginx-token-d9wlx") pod "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea" (UID: "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea"). InnerVolumeSpecName "ingress-nginx-token-d9wlx". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 21:19:49 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:49.345415    1433 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea" (UID: "584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Jan 30 21:19:49 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:49.442044    1433 reconciler.go:319] Volume detached for volume "ingress-nginx-token-d9wlx" (UniqueName: "kubernetes.io/secret/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea-ingress-nginx-token-d9wlx") on node "ingress-addon-legacy-298651" DevicePath ""
	Jan 30 21:19:49 ingress-addon-legacy-298651 kubelet[1433]: I0130 21:19:49.442080    1433 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea-webhook-cert") on node "ingress-addon-legacy-298651" DevicePath ""
	Jan 30 21:19:50 ingress-addon-legacy-298651 kubelet[1433]: W0130 21:19:50.266113    1433 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/584ba0d5-75e6-4b47-a1ed-d6bd5eb4b1ea/volumes" does not exist
	
	
	==> storage-provisioner [87642d0389fd0ab71b23750f63da72f63f7f2bba496d2256b0bd039b2237300a] <==
	I0130 21:16:30.601171       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 21:16:30.613681       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 21:16:30.613738       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 21:16:30.629617       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 21:16:30.630402       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-298651_69d4f4b1-ceb4-46f2-9184-93b0648c25d7!
	I0130 21:16:30.631450       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b98af226-ec04-4d4c-b6cd-7f34c317860a", APIVersion:"v1", ResourceVersion:"371", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-298651_69d4f4b1-ceb4-46f2-9184-93b0648c25d7 became leader
	I0130 21:16:30.731348       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-298651_69d4f4b1-ceb4-46f2-9184-93b0648c25d7!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-298651 -n ingress-addon-legacy-298651
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-298651 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (181.18s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (687.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:311: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-721181
multinode_test.go:318: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-721181
E0130 21:29:25.156926  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:29:32.717245  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
multinode_test.go:318: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-721181: exit status 82 (2m0.287254198s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-721181"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:320: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-721181" : exit status 82
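The non-zero exit above is minikube's GUEST_STOP_TIMEOUT path (exit status 82). As a minimal, hedged sketch of how the same stop could be retried and the requested diagnostics gathered by hand, it reuses only commands and flags that already appear in this output (the profile name multinode-721181, the `minikube logs --file=logs.txt` suggestion from the advice box, and the `-p` flag used with `logs` elsewhere in this report); the exit-status check is an illustrative addition, not part of the test:

    # Retry the stop that timed out and record its exit status
    out/minikube-linux-amd64 stop -p multinode-721181
    echo "stop exited with: $?"   # 82 corresponds to GUEST_STOP_TIMEOUT above

    # Collect the log bundle the advice box asks to attach to a GitHub issue
    out/minikube-linux-amd64 -p multinode-721181 logs --file=logs.txt
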
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-721181 --wait=true -v=8 --alsologtostderr
E0130 21:30:48.205209  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:31:52.587633  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:34:25.159080  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:34:32.717310  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:35:55.763096  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:36:52.587285  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:38:15.633280  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
multinode_test.go:323: (dbg) Done: out/minikube-linux-amd64 start -p multinode-721181 --wait=true -v=8 --alsologtostderr: (9m24.788472345s)
multinode_test.go:328: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-721181
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-721181 -n multinode-721181
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-721181 logs -n 25: (1.510896246s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp multinode-721181-m02:/home/docker/cp-test.txt                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3145735879/001/cp-test_multinode-721181-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp multinode-721181-m02:/home/docker/cp-test.txt                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181:/home/docker/cp-test_multinode-721181-m02_multinode-721181.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n multinode-721181 sudo cat                                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | /home/docker/cp-test_multinode-721181-m02_multinode-721181.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp multinode-721181-m02:/home/docker/cp-test.txt                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m03:/home/docker/cp-test_multinode-721181-m02_multinode-721181-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n multinode-721181-m03 sudo cat                                   | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | /home/docker/cp-test_multinode-721181-m02_multinode-721181-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp testdata/cp-test.txt                                                | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp multinode-721181-m03:/home/docker/cp-test.txt                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile3145735879/001/cp-test_multinode-721181-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp multinode-721181-m03:/home/docker/cp-test.txt                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181:/home/docker/cp-test_multinode-721181-m03_multinode-721181.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n multinode-721181 sudo cat                                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | /home/docker/cp-test_multinode-721181-m03_multinode-721181.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-721181 cp multinode-721181-m03:/home/docker/cp-test.txt                       | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m02:/home/docker/cp-test_multinode-721181-m03_multinode-721181-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n                                                                 | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | multinode-721181-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-721181 ssh -n multinode-721181-m02 sudo cat                                   | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | /home/docker/cp-test_multinode-721181-m03_multinode-721181-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-721181 node stop m03                                                          | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	| node    | multinode-721181 node start                                                             | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC | 30 Jan 24 21:27 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-721181                                                                | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC |                     |
	| stop    | -p multinode-721181                                                                     | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:27 UTC |                     |
	| start   | -p multinode-721181                                                                     | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:29 UTC | 30 Jan 24 21:39 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-721181                                                                | multinode-721181 | jenkins | v1.32.0 | 30 Jan 24 21:39 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 21:29:45
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 21:29:45.024607  664102 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:29:45.024731  664102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:29:45.024741  664102 out.go:309] Setting ErrFile to fd 2...
	I0130 21:29:45.024745  664102 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:29:45.024959  664102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:29:45.025578  664102 out.go:303] Setting JSON to false
	I0130 21:29:45.026690  664102 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7937,"bootTime":1706642248,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:29:45.026748  664102 start.go:138] virtualization: kvm guest
	I0130 21:29:45.029323  664102 out.go:177] * [multinode-721181] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:29:45.031015  664102 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 21:29:45.031027  664102 notify.go:220] Checking for updates...
	I0130 21:29:45.032351  664102 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:29:45.033727  664102 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:29:45.034946  664102 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:29:45.036261  664102 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 21:29:45.037560  664102 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 21:29:45.039215  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:29:45.039304  664102 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:29:45.039726  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:29:45.039773  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:29:45.054988  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41963
	I0130 21:29:45.055432  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:29:45.056006  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:29:45.056067  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:29:45.056451  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:29:45.056647  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:29:45.090815  664102 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 21:29:45.091991  664102 start.go:298] selected driver: kvm2
	I0130 21:29:45.092001  664102 start.go:902] validating driver "kvm2" against &{Name:multinode-721181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:29:45.092117  664102 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 21:29:45.092431  664102 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:29:45.092512  664102 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 21:29:45.105989  664102 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 21:29:45.106679  664102 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 21:29:45.106751  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:29:45.106763  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:29:45.106772  664102 start_flags.go:321] config:
	{Name:multinode-721181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:29:45.107009  664102 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:29:45.109532  664102 out.go:177] * Starting control plane node multinode-721181 in cluster multinode-721181
	I0130 21:29:45.110970  664102 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:29:45.110994  664102 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 21:29:45.111001  664102 cache.go:56] Caching tarball of preloaded images
	I0130 21:29:45.111091  664102 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 21:29:45.111101  664102 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 21:29:45.111222  664102 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/config.json ...
	I0130 21:29:45.111430  664102 start.go:365] acquiring machines lock for multinode-721181: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 21:29:45.111478  664102 start.go:369] acquired machines lock for "multinode-721181" in 24.887µs
	I0130 21:29:45.111494  664102 start.go:96] Skipping create...Using existing machine configuration
	I0130 21:29:45.111502  664102 fix.go:54] fixHost starting: 
	I0130 21:29:45.111743  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:29:45.111774  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:29:45.124579  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39663
	I0130 21:29:45.124984  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:29:45.125459  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:29:45.125497  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:29:45.125804  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:29:45.125974  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:29:45.126100  664102 main.go:141] libmachine: (multinode-721181) Calling .GetState
	I0130 21:29:45.127602  664102 fix.go:102] recreateIfNeeded on multinode-721181: state=Running err=<nil>
	W0130 21:29:45.127637  664102 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 21:29:45.129667  664102 out.go:177] * Updating the running kvm2 "multinode-721181" VM ...
	I0130 21:29:45.130963  664102 machine.go:88] provisioning docker machine ...
	I0130 21:29:45.130979  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:29:45.131184  664102 main.go:141] libmachine: (multinode-721181) Calling .GetMachineName
	I0130 21:29:45.131339  664102 buildroot.go:166] provisioning hostname "multinode-721181"
	I0130 21:29:45.131359  664102 main.go:141] libmachine: (multinode-721181) Calling .GetMachineName
	I0130 21:29:45.131484  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:29:45.133739  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:29:45.134188  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:24:28 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:29:45.134234  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:29:45.134292  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:29:45.134451  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:29:45.134602  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:29:45.134754  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:29:45.134911  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:29:45.135261  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0130 21:29:45.135278  664102 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-721181 && echo "multinode-721181" | sudo tee /etc/hostname
	I0130 21:30:03.497717  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:09.577756  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:12.649826  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:18.729760  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:21.801736  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:27.881794  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:30.953724  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:37.033801  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:40.105772  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:46.185762  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:49.257725  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:55.337711  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:30:58.409721  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:04.489757  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:07.561765  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:13.641786  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:16.713753  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:22.793766  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:25.865765  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:31.945753  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:35.017681  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:41.097742  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:44.169740  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:50.249739  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:53.321780  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:31:59.401802  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:02.473749  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:08.553811  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:11.625817  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:17.705753  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:20.777774  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:26.857763  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:29.929738  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:36.009771  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:39.081785  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:45.165679  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:48.233742  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:54.313802  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:32:57.385806  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:03.465754  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:06.537719  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:12.617758  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:15.689793  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:21.769778  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:24.841727  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:30.921751  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:33.993795  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:40.073777  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:43.145819  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:49.225807  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:52.297833  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:33:58.377786  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:01.449704  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:07.529804  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:10.605735  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:16.681752  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:19.753808  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:25.833736  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:28.905780  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:34.985729  664102 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.174:22: connect: no route to host
	I0130 21:34:37.988006  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:34:37.988068  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:34:37.990378  664102 machine.go:91] provisioned docker machine in 4m52.859394327s
	I0130 21:34:37.990443  664102 fix.go:56] fixHost completed within 4m52.878941459s
	I0130 21:34:37.990455  664102 start.go:83] releasing machines lock for "multinode-721181", held for 4m52.878969216s
	W0130 21:34:37.990473  664102 start.go:694] error starting host: provision: host is not running
	W0130 21:34:37.990638  664102 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 21:34:37.990649  664102 start.go:709] Will try again in 5 seconds ...
	I0130 21:34:42.991005  664102 start.go:365] acquiring machines lock for multinode-721181: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 21:34:42.991165  664102 start.go:369] acquired machines lock for "multinode-721181" in 86.084µs
	I0130 21:34:42.991195  664102 start.go:96] Skipping create...Using existing machine configuration
	I0130 21:34:42.991204  664102 fix.go:54] fixHost starting: 
	I0130 21:34:42.991502  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:34:42.991532  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:34:43.006908  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46337
	I0130 21:34:43.007467  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:34:43.007950  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:34:43.007972  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:34:43.008391  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:34:43.008582  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:34:43.008767  664102 main.go:141] libmachine: (multinode-721181) Calling .GetState
	I0130 21:34:43.010425  664102 fix.go:102] recreateIfNeeded on multinode-721181: state=Stopped err=<nil>
	I0130 21:34:43.010449  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	W0130 21:34:43.010635  664102 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 21:34:43.012708  664102 out.go:177] * Restarting existing kvm2 VM for "multinode-721181" ...
	I0130 21:34:43.013909  664102 main.go:141] libmachine: (multinode-721181) Calling .Start
	I0130 21:34:43.014048  664102 main.go:141] libmachine: (multinode-721181) Ensuring networks are active...
	I0130 21:34:43.014915  664102 main.go:141] libmachine: (multinode-721181) Ensuring network default is active
	I0130 21:34:43.015307  664102 main.go:141] libmachine: (multinode-721181) Ensuring network mk-multinode-721181 is active
	I0130 21:34:43.015735  664102 main.go:141] libmachine: (multinode-721181) Getting domain xml...
	I0130 21:34:43.016445  664102 main.go:141] libmachine: (multinode-721181) Creating domain...
	I0130 21:34:44.204219  664102 main.go:141] libmachine: (multinode-721181) Waiting to get IP...
	I0130 21:34:44.205232  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:44.205817  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:44.205914  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:44.205768  664881 retry.go:31] will retry after 189.30727ms: waiting for machine to come up
	I0130 21:34:44.397361  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:44.397934  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:44.397969  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:44.397894  664881 retry.go:31] will retry after 263.450707ms: waiting for machine to come up
	I0130 21:34:44.663414  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:44.663781  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:44.663808  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:44.663747  664881 retry.go:31] will retry after 473.597023ms: waiting for machine to come up
	I0130 21:34:45.139470  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:45.139939  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:45.139972  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:45.139877  664881 retry.go:31] will retry after 556.081771ms: waiting for machine to come up
	I0130 21:34:45.697518  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:45.697913  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:45.697936  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:45.697850  664881 retry.go:31] will retry after 468.810461ms: waiting for machine to come up
	I0130 21:34:46.168043  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:46.168507  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:46.168532  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:46.168467  664881 retry.go:31] will retry after 766.585508ms: waiting for machine to come up
	I0130 21:34:46.936355  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:46.936779  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:46.936801  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:46.936726  664881 retry.go:31] will retry after 914.922573ms: waiting for machine to come up
	I0130 21:34:47.852891  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:47.853358  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:47.853394  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:47.853307  664881 retry.go:31] will retry after 1.15276224s: waiting for machine to come up
	I0130 21:34:49.007992  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:49.008455  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:49.008488  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:49.008396  664881 retry.go:31] will retry after 1.825021125s: waiting for machine to come up
	I0130 21:34:50.834482  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:50.834936  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:50.834965  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:50.834882  664881 retry.go:31] will retry after 2.326201349s: waiting for machine to come up
	I0130 21:34:53.163097  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:53.163627  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:53.163658  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:53.163570  664881 retry.go:31] will retry after 2.196911491s: waiting for machine to come up
	I0130 21:34:55.362997  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:55.363487  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:55.363554  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:55.363465  664881 retry.go:31] will retry after 3.315577463s: waiting for machine to come up
	I0130 21:34:58.680166  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:34:58.680619  664102 main.go:141] libmachine: (multinode-721181) DBG | unable to find current IP address of domain multinode-721181 in network mk-multinode-721181
	I0130 21:34:58.680652  664102 main.go:141] libmachine: (multinode-721181) DBG | I0130 21:34:58.680560  664881 retry.go:31] will retry after 4.311400563s: waiting for machine to come up
	I0130 21:35:02.993228  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:02.993742  664102 main.go:141] libmachine: (multinode-721181) Found IP for machine: 192.168.39.174
	I0130 21:35:02.993763  664102 main.go:141] libmachine: (multinode-721181) Reserving static IP address...
	I0130 21:35:02.993782  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has current primary IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:02.994246  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "multinode-721181", mac: "52:54:00:d2:1b:35", ip: "192.168.39.174"} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:02.994272  664102 main.go:141] libmachine: (multinode-721181) DBG | skip adding static IP to network mk-multinode-721181 - found existing host DHCP lease matching {name: "multinode-721181", mac: "52:54:00:d2:1b:35", ip: "192.168.39.174"}
	I0130 21:35:02.994283  664102 main.go:141] libmachine: (multinode-721181) Reserved static IP address: 192.168.39.174
	I0130 21:35:02.994299  664102 main.go:141] libmachine: (multinode-721181) Waiting for SSH to be available...
	I0130 21:35:02.994326  664102 main.go:141] libmachine: (multinode-721181) DBG | Getting to WaitForSSH function...
	I0130 21:35:02.996438  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:02.996772  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:02.996816  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:02.996880  664102 main.go:141] libmachine: (multinode-721181) DBG | Using SSH client type: external
	I0130 21:35:02.996942  664102 main.go:141] libmachine: (multinode-721181) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa (-rw-------)
	I0130 21:35:02.996987  664102 main.go:141] libmachine: (multinode-721181) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.174 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 21:35:02.997004  664102 main.go:141] libmachine: (multinode-721181) DBG | About to run SSH command:
	I0130 21:35:02.997012  664102 main.go:141] libmachine: (multinode-721181) DBG | exit 0
	I0130 21:35:03.089228  664102 main.go:141] libmachine: (multinode-721181) DBG | SSH cmd err, output: <nil>: 
	I0130 21:35:03.089686  664102 main.go:141] libmachine: (multinode-721181) Calling .GetConfigRaw
	I0130 21:35:03.090286  664102 main.go:141] libmachine: (multinode-721181) Calling .GetIP
	I0130 21:35:03.092812  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.093182  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.093212  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.093521  664102 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/config.json ...
	I0130 21:35:03.093724  664102 machine.go:88] provisioning docker machine ...
	I0130 21:35:03.093748  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:35:03.093936  664102 main.go:141] libmachine: (multinode-721181) Calling .GetMachineName
	I0130 21:35:03.094139  664102 buildroot.go:166] provisioning hostname "multinode-721181"
	I0130 21:35:03.094162  664102 main.go:141] libmachine: (multinode-721181) Calling .GetMachineName
	I0130 21:35:03.094314  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:03.096387  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.096764  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.096792  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.096894  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:03.097049  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.097205  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.097388  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:03.097569  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:03.097981  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0130 21:35:03.098002  664102 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-721181 && echo "multinode-721181" | sudo tee /etc/hostname
	I0130 21:35:03.233259  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-721181
	
	I0130 21:35:03.233288  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:03.235853  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.236214  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.236250  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.236359  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:03.236568  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.236716  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.236868  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:03.237021  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:03.237508  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0130 21:35:03.237539  664102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-721181' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-721181/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-721181' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 21:35:03.368554  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:35:03.368586  664102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 21:35:03.368605  664102 buildroot.go:174] setting up certificates
	I0130 21:35:03.368615  664102 provision.go:83] configureAuth start
	I0130 21:35:03.368623  664102 main.go:141] libmachine: (multinode-721181) Calling .GetMachineName
	I0130 21:35:03.368944  664102 main.go:141] libmachine: (multinode-721181) Calling .GetIP
	I0130 21:35:03.371654  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.372020  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.372050  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.372368  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:03.374887  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.375337  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.375371  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.375497  664102 provision.go:138] copyHostCerts
	I0130 21:35:03.375533  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:35:03.375569  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 21:35:03.375580  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:35:03.375651  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 21:35:03.375757  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:35:03.375785  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 21:35:03.375793  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:35:03.375828  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 21:35:03.375890  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:35:03.375912  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 21:35:03.375919  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:35:03.375945  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 21:35:03.376004  664102 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.multinode-721181 san=[192.168.39.174 192.168.39.174 localhost 127.0.0.1 minikube multinode-721181]
	I0130 21:35:03.675924  664102 provision.go:172] copyRemoteCerts
	I0130 21:35:03.676009  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 21:35:03.676071  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:03.678479  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.678841  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.678866  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.679051  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:03.679227  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.679427  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:03.679561  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:35:03.770784  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 21:35:03.770852  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 21:35:03.792334  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 21:35:03.792405  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0130 21:35:03.812842  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 21:35:03.812892  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 21:35:03.833128  664102 provision.go:86] duration metric: configureAuth took 464.494728ms
	I0130 21:35:03.833159  664102 buildroot.go:189] setting minikube options for container-runtime
	I0130 21:35:03.833438  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:35:03.833615  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:03.836348  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.836792  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:03.836833  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:03.836945  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:03.837153  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.837332  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:03.837436  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:03.837619  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:03.837936  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0130 21:35:03.837951  664102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 21:35:04.156894  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 21:35:04.156922  664102 machine.go:91] provisioned docker machine in 1.063183182s
	I0130 21:35:04.156934  664102 start.go:300] post-start starting for "multinode-721181" (driver="kvm2")
	I0130 21:35:04.156965  664102 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 21:35:04.156986  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:35:04.157380  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 21:35:04.157408  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:04.160241  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.160573  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:04.160608  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.160741  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:04.160942  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:04.161127  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:04.161290  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:35:04.250976  664102 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 21:35:04.254964  664102 command_runner.go:130] > NAME=Buildroot
	I0130 21:35:04.254984  664102 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0130 21:35:04.254990  664102 command_runner.go:130] > ID=buildroot
	I0130 21:35:04.254999  664102 command_runner.go:130] > VERSION_ID=2021.02.12
	I0130 21:35:04.255006  664102 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0130 21:35:04.255050  664102 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 21:35:04.255068  664102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 21:35:04.255141  664102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 21:35:04.255246  664102 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 21:35:04.255258  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /etc/ssl/certs/6477182.pem
	I0130 21:35:04.255376  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 21:35:04.263070  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:35:04.284920  664102 start.go:303] post-start completed in 127.970876ms
	I0130 21:35:04.284943  664102 fix.go:56] fixHost completed within 21.293739301s
	I0130 21:35:04.284965  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:04.287634  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.287979  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:04.288008  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.288240  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:04.288447  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:04.288606  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:04.288770  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:04.288910  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:04.289238  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.174 22 <nil> <nil>}
	I0130 21:35:04.289249  664102 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 21:35:04.413731  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706650504.364708777
	
	I0130 21:35:04.413757  664102 fix.go:206] guest clock: 1706650504.364708777
	I0130 21:35:04.413767  664102 fix.go:219] Guest: 2024-01-30 21:35:04.364708777 +0000 UTC Remote: 2024-01-30 21:35:04.284946456 +0000 UTC m=+319.319680089 (delta=79.762321ms)
	I0130 21:35:04.413793  664102 fix.go:190] guest clock delta is within tolerance: 79.762321ms
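As a sanity check on the fix.go lines above, the delta is simply the guest timestamp minus the host-side timestamp recorded for the same moment: 1706650504.364708777 - 1706650504.284946456 = 0.079762321 s, i.e. exactly the 79.762321ms the log reports as within tolerance.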
	I0130 21:35:04.413800  664102 start.go:83] releasing machines lock for "multinode-721181", held for 21.422622267s
	I0130 21:35:04.413829  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:35:04.414100  664102 main.go:141] libmachine: (multinode-721181) Calling .GetIP
	I0130 21:35:04.416371  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.416676  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:04.416708  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.416822  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:35:04.417401  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:35:04.417591  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:35:04.417679  664102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 21:35:04.417734  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:04.417859  664102 ssh_runner.go:195] Run: cat /version.json
	I0130 21:35:04.417892  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:35:04.420117  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.420507  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:04.420538  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.420579  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.420674  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:04.420851  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:04.420988  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:04.421011  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:04.421014  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:04.421197  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:35:04.421192  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:35:04.421376  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:35:04.421517  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:35:04.421655  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:35:04.533207  664102 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0130 21:35:04.534154  664102 command_runner.go:130] > {"iso_version": "v1.32.1-1703784139-17866", "kicbase_version": "v0.0.42-1703723663-17866", "minikube_version": "v1.32.0", "commit": "eb69424d8f623d7cabea57d4395ce87adf1b5fc3"}
	I0130 21:35:04.534343  664102 ssh_runner.go:195] Run: systemctl --version
	I0130 21:35:04.540064  664102 command_runner.go:130] > systemd 247 (247)
	I0130 21:35:04.540089  664102 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0130 21:35:04.540159  664102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 21:35:04.682989  664102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0130 21:35:04.688567  664102 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0130 21:35:04.688693  664102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 21:35:04.688782  664102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 21:35:04.703433  664102 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0130 21:35:04.703724  664102 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
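The find command above renames rather than deletes any bridge/podman CNI configs (appending a .mk_disabled suffix); per the following line, the only match on this guest was 87-podman-bridge.conflist. A minimal sketch of the effect, using only the path shown in the log (illustration only, not part of the test run):

    # equivalent of the single rename performed by the find -exec above
    sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled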
	I0130 21:35:04.703748  664102 start.go:475] detecting cgroup driver to use...
	I0130 21:35:04.703826  664102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 21:35:04.718858  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 21:35:04.730641  664102 docker.go:217] disabling cri-docker service (if available) ...
	I0130 21:35:04.730719  664102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 21:35:04.743048  664102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 21:35:04.754480  664102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 21:35:04.854483  664102 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0130 21:35:04.854569  664102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 21:35:04.969430  664102 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0130 21:35:04.969488  664102 docker.go:233] disabling docker service ...
	I0130 21:35:04.969546  664102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 21:35:04.982936  664102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 21:35:04.994422  664102 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0130 21:35:04.994516  664102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 21:35:05.007885  664102 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0130 21:35:05.092810  664102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 21:35:05.104438  664102 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0130 21:35:05.104482  664102 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0130 21:35:05.201039  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 21:35:05.213463  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 21:35:05.229011  664102 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0130 21:35:05.229431  664102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 21:35:05.229513  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:35:05.238964  664102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 21:35:05.239035  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:35:05.248568  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:35:05.257959  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
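The three sed edits above (pause image, cgroup manager, conmon cgroup) all rewrite the same drop-in file, so their effect can be read back directly. A minimal sketch for verifying it by hand, using only paths and values that appear in this log (the grep is an illustration, not something the test itself runs):

    # confirm the cri-o drop-in rewritten by the sed commands above
    grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the sed commands above and the later crio config dump in this log:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"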
	I0130 21:35:05.267162  664102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 21:35:05.276707  664102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 21:35:05.285082  664102 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 21:35:05.285234  664102 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 21:35:05.285286  664102 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 21:35:05.298320  664102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 21:35:05.306801  664102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 21:35:05.403715  664102 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 21:35:05.552913  664102 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 21:35:05.553012  664102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 21:35:05.557652  664102 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0130 21:35:05.557676  664102 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0130 21:35:05.557687  664102 command_runner.go:130] > Device: 16h/22d	Inode: 774         Links: 1
	I0130 21:35:05.557697  664102 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 21:35:05.557704  664102 command_runner.go:130] > Access: 2024-01-30 21:35:05.491579323 +0000
	I0130 21:35:05.557714  664102 command_runner.go:130] > Modify: 2024-01-30 21:35:05.491579323 +0000
	I0130 21:35:05.557721  664102 command_runner.go:130] > Change: 2024-01-30 21:35:05.491579323 +0000
	I0130 21:35:05.557728  664102 command_runner.go:130] >  Birth: -
	I0130 21:35:05.558013  664102 start.go:543] Will wait 60s for crictl version
	I0130 21:35:05.558065  664102 ssh_runner.go:195] Run: which crictl
	I0130 21:35:05.562797  664102 command_runner.go:130] > /usr/bin/crictl
	I0130 21:35:05.562958  664102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 21:35:05.597652  664102 command_runner.go:130] > Version:  0.1.0
	I0130 21:35:05.597678  664102 command_runner.go:130] > RuntimeName:  cri-o
	I0130 21:35:05.597683  664102 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0130 21:35:05.597689  664102 command_runner.go:130] > RuntimeApiVersion:  v1
	I0130 21:35:05.599058  664102 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 21:35:05.599177  664102 ssh_runner.go:195] Run: crio --version
	I0130 21:35:05.640567  664102 command_runner.go:130] > crio version 1.24.1
	I0130 21:35:05.640590  664102 command_runner.go:130] > Version:          1.24.1
	I0130 21:35:05.640600  664102 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 21:35:05.640609  664102 command_runner.go:130] > GitTreeState:     dirty
	I0130 21:35:05.640615  664102 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 21:35:05.640619  664102 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 21:35:05.640623  664102 command_runner.go:130] > Compiler:         gc
	I0130 21:35:05.640628  664102 command_runner.go:130] > Platform:         linux/amd64
	I0130 21:35:05.640633  664102 command_runner.go:130] > Linkmode:         dynamic
	I0130 21:35:05.640640  664102 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 21:35:05.640645  664102 command_runner.go:130] > SeccompEnabled:   true
	I0130 21:35:05.640649  664102 command_runner.go:130] > AppArmorEnabled:  false
	I0130 21:35:05.641934  664102 ssh_runner.go:195] Run: crio --version
	I0130 21:35:05.689406  664102 command_runner.go:130] > crio version 1.24.1
	I0130 21:35:05.689443  664102 command_runner.go:130] > Version:          1.24.1
	I0130 21:35:05.689456  664102 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 21:35:05.689463  664102 command_runner.go:130] > GitTreeState:     dirty
	I0130 21:35:05.689488  664102 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 21:35:05.689497  664102 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 21:35:05.689509  664102 command_runner.go:130] > Compiler:         gc
	I0130 21:35:05.689520  664102 command_runner.go:130] > Platform:         linux/amd64
	I0130 21:35:05.689530  664102 command_runner.go:130] > Linkmode:         dynamic
	I0130 21:35:05.689545  664102 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 21:35:05.689555  664102 command_runner.go:130] > SeccompEnabled:   true
	I0130 21:35:05.689566  664102 command_runner.go:130] > AppArmorEnabled:  false
	I0130 21:35:05.692897  664102 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 21:35:05.694442  664102 main.go:141] libmachine: (multinode-721181) Calling .GetIP
	I0130 21:35:05.697160  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:05.697538  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:35:05.697569  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:35:05.697733  664102 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 21:35:05.701994  664102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 21:35:05.714082  664102 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:35:05.714132  664102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 21:35:05.750099  664102 command_runner.go:130] > {
	I0130 21:35:05.750124  664102 command_runner.go:130] >   "images": [
	I0130 21:35:05.750129  664102 command_runner.go:130] >     {
	I0130 21:35:05.750137  664102 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0130 21:35:05.750142  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:05.750164  664102 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0130 21:35:05.750170  664102 command_runner.go:130] >       ],
	I0130 21:35:05.750177  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:05.750192  664102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0130 21:35:05.750206  664102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0130 21:35:05.750212  664102 command_runner.go:130] >       ],
	I0130 21:35:05.750217  664102 command_runner.go:130] >       "size": "750414",
	I0130 21:35:05.750222  664102 command_runner.go:130] >       "uid": {
	I0130 21:35:05.750226  664102 command_runner.go:130] >         "value": "65535"
	I0130 21:35:05.750231  664102 command_runner.go:130] >       },
	I0130 21:35:05.750235  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:05.750245  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:05.750250  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:05.750259  664102 command_runner.go:130] >     }
	I0130 21:35:05.750268  664102 command_runner.go:130] >   ]
	I0130 21:35:05.750277  664102 command_runner.go:130] > }
	I0130 21:35:05.751319  664102 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 21:35:05.751372  664102 ssh_runner.go:195] Run: which lz4
	I0130 21:35:05.754950  664102 command_runner.go:130] > /usr/bin/lz4
	I0130 21:35:05.755040  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0130 21:35:05.755111  664102 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 21:35:05.758840  664102 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 21:35:05.759014  664102 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 21:35:05.759046  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 21:35:07.552380  664102 crio.go:444] Took 1.797286 seconds to copy over tarball
	I0130 21:35:07.552451  664102 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 21:35:10.243342  664102 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.690860663s)
	I0130 21:35:10.243368  664102 crio.go:451] Took 2.690962 seconds to extract the tarball
	I0130 21:35:10.243378  664102 ssh_runner.go:146] rm: /preloaded.tar.lz4
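For scale, the byte count and durations logged above work out to 458,073,571 bytes ≈ 437 MiB, which gives roughly 243 MiB/s for the SSH copy of the preload tarball (437 MiB / 1.797 s) and roughly 162 MiB/s for the lz4 extraction into /var (437 MiB / 2.691 s); both figures are derived only from the values already logged here.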
	I0130 21:35:10.283684  664102 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 21:35:10.327295  664102 command_runner.go:130] > {
	I0130 21:35:10.327327  664102 command_runner.go:130] >   "images": [
	I0130 21:35:10.327331  664102 command_runner.go:130] >     {
	I0130 21:35:10.327340  664102 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0130 21:35:10.327345  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.327352  664102 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0130 21:35:10.327356  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327360  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.327373  664102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0130 21:35:10.327384  664102 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0130 21:35:10.327391  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327399  664102 command_runner.go:130] >       "size": "65258016",
	I0130 21:35:10.327407  664102 command_runner.go:130] >       "uid": null,
	I0130 21:35:10.327412  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.327420  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.327427  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.327431  664102 command_runner.go:130] >     },
	I0130 21:35:10.327442  664102 command_runner.go:130] >     {
	I0130 21:35:10.327456  664102 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0130 21:35:10.327467  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.327476  664102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0130 21:35:10.327487  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327497  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.327511  664102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0130 21:35:10.327528  664102 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0130 21:35:10.327537  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327550  664102 command_runner.go:130] >       "size": "31470524",
	I0130 21:35:10.327558  664102 command_runner.go:130] >       "uid": null,
	I0130 21:35:10.327563  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.327579  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.327587  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.327591  664102 command_runner.go:130] >     },
	I0130 21:35:10.327595  664102 command_runner.go:130] >     {
	I0130 21:35:10.327604  664102 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0130 21:35:10.327610  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.327616  664102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0130 21:35:10.327622  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327626  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.327636  664102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0130 21:35:10.327644  664102 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0130 21:35:10.327650  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327655  664102 command_runner.go:130] >       "size": "53621675",
	I0130 21:35:10.327661  664102 command_runner.go:130] >       "uid": null,
	I0130 21:35:10.327666  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.327672  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.327676  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.327680  664102 command_runner.go:130] >     },
	I0130 21:35:10.327686  664102 command_runner.go:130] >     {
	I0130 21:35:10.327696  664102 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0130 21:35:10.327700  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.327705  664102 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0130 21:35:10.327709  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327713  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.327722  664102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0130 21:35:10.327729  664102 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0130 21:35:10.327746  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327754  664102 command_runner.go:130] >       "size": "295456551",
	I0130 21:35:10.327758  664102 command_runner.go:130] >       "uid": {
	I0130 21:35:10.327765  664102 command_runner.go:130] >         "value": "0"
	I0130 21:35:10.327769  664102 command_runner.go:130] >       },
	I0130 21:35:10.327773  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.327777  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.327784  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.327788  664102 command_runner.go:130] >     },
	I0130 21:35:10.327792  664102 command_runner.go:130] >     {
	I0130 21:35:10.327803  664102 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0130 21:35:10.327810  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.327816  664102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0130 21:35:10.327820  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327826  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.327834  664102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0130 21:35:10.327850  664102 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0130 21:35:10.327858  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327862  664102 command_runner.go:130] >       "size": "127226832",
	I0130 21:35:10.327869  664102 command_runner.go:130] >       "uid": {
	I0130 21:35:10.327873  664102 command_runner.go:130] >         "value": "0"
	I0130 21:35:10.327880  664102 command_runner.go:130] >       },
	I0130 21:35:10.327884  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.327891  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.327895  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.327901  664102 command_runner.go:130] >     },
	I0130 21:35:10.327905  664102 command_runner.go:130] >     {
	I0130 21:35:10.327914  664102 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0130 21:35:10.327924  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.327933  664102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0130 21:35:10.327939  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327944  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.327954  664102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0130 21:35:10.327964  664102 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0130 21:35:10.327970  664102 command_runner.go:130] >       ],
	I0130 21:35:10.327975  664102 command_runner.go:130] >       "size": "123261750",
	I0130 21:35:10.327981  664102 command_runner.go:130] >       "uid": {
	I0130 21:35:10.327985  664102 command_runner.go:130] >         "value": "0"
	I0130 21:35:10.327991  664102 command_runner.go:130] >       },
	I0130 21:35:10.327995  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.328002  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.328007  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.328013  664102 command_runner.go:130] >     },
	I0130 21:35:10.328017  664102 command_runner.go:130] >     {
	I0130 21:35:10.328025  664102 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0130 21:35:10.328032  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.328040  664102 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0130 21:35:10.328048  664102 command_runner.go:130] >       ],
	I0130 21:35:10.328052  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.328073  664102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0130 21:35:10.328083  664102 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0130 21:35:10.328088  664102 command_runner.go:130] >       ],
	I0130 21:35:10.328092  664102 command_runner.go:130] >       "size": "74749335",
	I0130 21:35:10.328096  664102 command_runner.go:130] >       "uid": null,
	I0130 21:35:10.328101  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.328104  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.328110  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.328120  664102 command_runner.go:130] >     },
	I0130 21:35:10.328131  664102 command_runner.go:130] >     {
	I0130 21:35:10.328145  664102 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0130 21:35:10.328153  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.328158  664102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0130 21:35:10.328165  664102 command_runner.go:130] >       ],
	I0130 21:35:10.328169  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.328194  664102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0130 21:35:10.328216  664102 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0130 21:35:10.328227  664102 command_runner.go:130] >       ],
	I0130 21:35:10.328236  664102 command_runner.go:130] >       "size": "61551410",
	I0130 21:35:10.328245  664102 command_runner.go:130] >       "uid": {
	I0130 21:35:10.328254  664102 command_runner.go:130] >         "value": "0"
	I0130 21:35:10.328258  664102 command_runner.go:130] >       },
	I0130 21:35:10.328264  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.328268  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.328275  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.328279  664102 command_runner.go:130] >     },
	I0130 21:35:10.328285  664102 command_runner.go:130] >     {
	I0130 21:35:10.328291  664102 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0130 21:35:10.328302  664102 command_runner.go:130] >       "repoTags": [
	I0130 21:35:10.328343  664102 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0130 21:35:10.328348  664102 command_runner.go:130] >       ],
	I0130 21:35:10.328355  664102 command_runner.go:130] >       "repoDigests": [
	I0130 21:35:10.328365  664102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0130 21:35:10.328379  664102 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0130 21:35:10.328390  664102 command_runner.go:130] >       ],
	I0130 21:35:10.328402  664102 command_runner.go:130] >       "size": "750414",
	I0130 21:35:10.328414  664102 command_runner.go:130] >       "uid": {
	I0130 21:35:10.328425  664102 command_runner.go:130] >         "value": "65535"
	I0130 21:35:10.328436  664102 command_runner.go:130] >       },
	I0130 21:35:10.328444  664102 command_runner.go:130] >       "username": "",
	I0130 21:35:10.328453  664102 command_runner.go:130] >       "spec": null,
	I0130 21:35:10.328461  664102 command_runner.go:130] >       "pinned": false
	I0130 21:35:10.328464  664102 command_runner.go:130] >     }
	I0130 21:35:10.328473  664102 command_runner.go:130] >   ]
	I0130 21:35:10.328484  664102 command_runner.go:130] > }
	I0130 21:35:10.328656  664102 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 21:35:10.328674  664102 cache_images.go:84] Images are preloaded, skipping loading
	I0130 21:35:10.328746  664102 ssh_runner.go:195] Run: crio config
	I0130 21:35:10.380121  664102 command_runner.go:130] ! time="2024-01-30 21:35:10.330652999Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0130 21:35:10.380227  664102 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0130 21:35:10.388540  664102 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0130 21:35:10.388569  664102 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0130 21:35:10.388576  664102 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0130 21:35:10.388580  664102 command_runner.go:130] > #
	I0130 21:35:10.388587  664102 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0130 21:35:10.388593  664102 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0130 21:35:10.388603  664102 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0130 21:35:10.388619  664102 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0130 21:35:10.388629  664102 command_runner.go:130] > # reload'.
	I0130 21:35:10.388640  664102 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0130 21:35:10.388653  664102 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0130 21:35:10.388664  664102 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0130 21:35:10.388673  664102 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0130 21:35:10.388676  664102 command_runner.go:130] > [crio]
	I0130 21:35:10.388683  664102 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0130 21:35:10.388692  664102 command_runner.go:130] > # containers images, in this directory.
	I0130 21:35:10.388704  664102 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0130 21:35:10.388718  664102 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0130 21:35:10.388728  664102 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0130 21:35:10.388743  664102 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0130 21:35:10.388756  664102 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0130 21:35:10.388766  664102 command_runner.go:130] > storage_driver = "overlay"
	I0130 21:35:10.388781  664102 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0130 21:35:10.388793  664102 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0130 21:35:10.388804  664102 command_runner.go:130] > storage_option = [
	I0130 21:35:10.388814  664102 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0130 21:35:10.388822  664102 command_runner.go:130] > ]
	I0130 21:35:10.388833  664102 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0130 21:35:10.388845  664102 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0130 21:35:10.388853  664102 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0130 21:35:10.388868  664102 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0130 21:35:10.388880  664102 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0130 21:35:10.388890  664102 command_runner.go:130] > # always happen on a node reboot
	I0130 21:35:10.388898  664102 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0130 21:35:10.388910  664102 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0130 21:35:10.388922  664102 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0130 21:35:10.388942  664102 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0130 21:35:10.388962  664102 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0130 21:35:10.388977  664102 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0130 21:35:10.388988  664102 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0130 21:35:10.388993  664102 command_runner.go:130] > # internal_wipe = true
	I0130 21:35:10.389002  664102 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0130 21:35:10.389008  664102 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0130 21:35:10.389016  664102 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0130 21:35:10.389021  664102 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0130 21:35:10.389029  664102 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0130 21:35:10.389033  664102 command_runner.go:130] > [crio.api]
	I0130 21:35:10.389041  664102 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0130 21:35:10.389046  664102 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0130 21:35:10.389051  664102 command_runner.go:130] > # IP address on which the stream server will listen.
	I0130 21:35:10.389058  664102 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0130 21:35:10.389065  664102 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0130 21:35:10.389072  664102 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0130 21:35:10.389076  664102 command_runner.go:130] > # stream_port = "0"
	I0130 21:35:10.389084  664102 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0130 21:35:10.389091  664102 command_runner.go:130] > # stream_enable_tls = false
	I0130 21:35:10.389100  664102 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0130 21:35:10.389105  664102 command_runner.go:130] > # stream_idle_timeout = ""
	I0130 21:35:10.389111  664102 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0130 21:35:10.389118  664102 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0130 21:35:10.389124  664102 command_runner.go:130] > # minutes.
	I0130 21:35:10.389127  664102 command_runner.go:130] > # stream_tls_cert = ""
	I0130 21:35:10.389133  664102 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0130 21:35:10.389141  664102 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0130 21:35:10.389146  664102 command_runner.go:130] > # stream_tls_key = ""
	I0130 21:35:10.389154  664102 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0130 21:35:10.389160  664102 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0130 21:35:10.389167  664102 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0130 21:35:10.389171  664102 command_runner.go:130] > # stream_tls_ca = ""
	I0130 21:35:10.389179  664102 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 21:35:10.389184  664102 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0130 21:35:10.389191  664102 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 21:35:10.389198  664102 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0130 21:35:10.389220  664102 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0130 21:35:10.389229  664102 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0130 21:35:10.389233  664102 command_runner.go:130] > [crio.runtime]
	I0130 21:35:10.389238  664102 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0130 21:35:10.389244  664102 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0130 21:35:10.389248  664102 command_runner.go:130] > # "nofile=1024:2048"
	I0130 21:35:10.389254  664102 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0130 21:35:10.389260  664102 command_runner.go:130] > # default_ulimits = [
	I0130 21:35:10.389263  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389272  664102 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0130 21:35:10.389278  664102 command_runner.go:130] > # no_pivot = false
	I0130 21:35:10.389283  664102 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0130 21:35:10.389292  664102 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0130 21:35:10.389297  664102 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0130 21:35:10.389304  664102 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0130 21:35:10.389310  664102 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0130 21:35:10.389318  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 21:35:10.389323  664102 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0130 21:35:10.389333  664102 command_runner.go:130] > # Cgroup setting for conmon
	I0130 21:35:10.389343  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0130 21:35:10.389347  664102 command_runner.go:130] > conmon_cgroup = "pod"
	I0130 21:35:10.389353  664102 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0130 21:35:10.389360  664102 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0130 21:35:10.389367  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 21:35:10.389372  664102 command_runner.go:130] > conmon_env = [
	I0130 21:35:10.389378  664102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0130 21:35:10.389382  664102 command_runner.go:130] > ]
	I0130 21:35:10.389387  664102 command_runner.go:130] > # Additional environment variables to set for all the
	I0130 21:35:10.389394  664102 command_runner.go:130] > # containers. These are overridden if set in the
	I0130 21:35:10.389400  664102 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0130 21:35:10.389406  664102 command_runner.go:130] > # default_env = [
	I0130 21:35:10.389410  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389415  664102 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0130 21:35:10.389422  664102 command_runner.go:130] > # selinux = false
	I0130 21:35:10.389428  664102 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0130 21:35:10.389436  664102 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0130 21:35:10.389446  664102 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0130 21:35:10.389450  664102 command_runner.go:130] > # seccomp_profile = ""
	I0130 21:35:10.389455  664102 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0130 21:35:10.389463  664102 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0130 21:35:10.389491  664102 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0130 21:35:10.389503  664102 command_runner.go:130] > # which might increase security.
	I0130 21:35:10.389508  664102 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0130 21:35:10.389516  664102 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0130 21:35:10.389522  664102 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0130 21:35:10.389529  664102 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0130 21:35:10.389535  664102 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0130 21:35:10.389542  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:35:10.389547  664102 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0130 21:35:10.389555  664102 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0130 21:35:10.389560  664102 command_runner.go:130] > # the cgroup blockio controller.
	I0130 21:35:10.389566  664102 command_runner.go:130] > # blockio_config_file = ""
	I0130 21:35:10.389572  664102 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0130 21:35:10.389576  664102 command_runner.go:130] > # irqbalance daemon.
	I0130 21:35:10.389586  664102 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0130 21:35:10.389595  664102 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0130 21:35:10.389600  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:35:10.389607  664102 command_runner.go:130] > # rdt_config_file = ""
	I0130 21:35:10.389612  664102 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0130 21:35:10.389619  664102 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0130 21:35:10.389625  664102 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0130 21:35:10.389631  664102 command_runner.go:130] > # separate_pull_cgroup = ""
	I0130 21:35:10.389637  664102 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0130 21:35:10.389646  664102 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0130 21:35:10.389650  664102 command_runner.go:130] > # will be added.
	I0130 21:35:10.389654  664102 command_runner.go:130] > # default_capabilities = [
	I0130 21:35:10.389658  664102 command_runner.go:130] > # 	"CHOWN",
	I0130 21:35:10.389663  664102 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0130 21:35:10.389668  664102 command_runner.go:130] > # 	"FSETID",
	I0130 21:35:10.389674  664102 command_runner.go:130] > # 	"FOWNER",
	I0130 21:35:10.389678  664102 command_runner.go:130] > # 	"SETGID",
	I0130 21:35:10.389682  664102 command_runner.go:130] > # 	"SETUID",
	I0130 21:35:10.389688  664102 command_runner.go:130] > # 	"SETPCAP",
	I0130 21:35:10.389695  664102 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0130 21:35:10.389699  664102 command_runner.go:130] > # 	"KILL",
	I0130 21:35:10.389707  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389718  664102 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0130 21:35:10.389731  664102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 21:35:10.389737  664102 command_runner.go:130] > # default_sysctls = [
	I0130 21:35:10.389745  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389753  664102 command_runner.go:130] > # List of devices on the host that a
	I0130 21:35:10.389766  664102 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0130 21:35:10.389775  664102 command_runner.go:130] > # allowed_devices = [
	I0130 21:35:10.389781  664102 command_runner.go:130] > # 	"/dev/fuse",
	I0130 21:35:10.389790  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389820  664102 command_runner.go:130] > # List of additional devices. specified as
	I0130 21:35:10.389838  664102 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0130 21:35:10.389843  664102 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0130 21:35:10.389873  664102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 21:35:10.389880  664102 command_runner.go:130] > # additional_devices = [
	I0130 21:35:10.389887  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389894  664102 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0130 21:35:10.389899  664102 command_runner.go:130] > # cdi_spec_dirs = [
	I0130 21:35:10.389905  664102 command_runner.go:130] > # 	"/etc/cdi",
	I0130 21:35:10.389909  664102 command_runner.go:130] > # 	"/var/run/cdi",
	I0130 21:35:10.389915  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389922  664102 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0130 21:35:10.389930  664102 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0130 21:35:10.389934  664102 command_runner.go:130] > # Defaults to false.
	I0130 21:35:10.389940  664102 command_runner.go:130] > # device_ownership_from_security_context = false
	I0130 21:35:10.389947  664102 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0130 21:35:10.389955  664102 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0130 21:35:10.389959  664102 command_runner.go:130] > # hooks_dir = [
	I0130 21:35:10.389964  664102 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0130 21:35:10.389970  664102 command_runner.go:130] > # ]
	I0130 21:35:10.389978  664102 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0130 21:35:10.389987  664102 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0130 21:35:10.389992  664102 command_runner.go:130] > # its default mounts from the following two files:
	I0130 21:35:10.390000  664102 command_runner.go:130] > #
	I0130 21:35:10.390008  664102 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0130 21:35:10.390014  664102 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0130 21:35:10.390022  664102 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0130 21:35:10.390026  664102 command_runner.go:130] > #
	I0130 21:35:10.390034  664102 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0130 21:35:10.390040  664102 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0130 21:35:10.390048  664102 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0130 21:35:10.390053  664102 command_runner.go:130] > #      only add mounts it finds in this file.
	I0130 21:35:10.390059  664102 command_runner.go:130] > #
	I0130 21:35:10.390064  664102 command_runner.go:130] > # default_mounts_file = ""
	I0130 21:35:10.390071  664102 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0130 21:35:10.390078  664102 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0130 21:35:10.390084  664102 command_runner.go:130] > pids_limit = 1024
	I0130 21:35:10.390090  664102 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0130 21:35:10.390098  664102 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0130 21:35:10.390104  664102 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0130 21:35:10.390116  664102 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0130 21:35:10.390125  664102 command_runner.go:130] > # log_size_max = -1
	I0130 21:35:10.390135  664102 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0130 21:35:10.390139  664102 command_runner.go:130] > # log_to_journald = false
	I0130 21:35:10.390147  664102 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0130 21:35:10.390152  664102 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0130 21:35:10.390160  664102 command_runner.go:130] > # Path to directory for container attach sockets.
	I0130 21:35:10.390165  664102 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0130 21:35:10.390173  664102 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0130 21:35:10.390177  664102 command_runner.go:130] > # bind_mount_prefix = ""
	I0130 21:35:10.390183  664102 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0130 21:35:10.390189  664102 command_runner.go:130] > # read_only = false
	I0130 21:35:10.390195  664102 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0130 21:35:10.390203  664102 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0130 21:35:10.390208  664102 command_runner.go:130] > # live configuration reload.
	I0130 21:35:10.390214  664102 command_runner.go:130] > # log_level = "info"
	I0130 21:35:10.390220  664102 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0130 21:35:10.390229  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:35:10.390233  664102 command_runner.go:130] > # log_filter = ""
	I0130 21:35:10.390244  664102 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0130 21:35:10.390250  664102 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0130 21:35:10.390254  664102 command_runner.go:130] > # separated by comma.
	I0130 21:35:10.390260  664102 command_runner.go:130] > # uid_mappings = ""
	I0130 21:35:10.390266  664102 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0130 21:35:10.390274  664102 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0130 21:35:10.390278  664102 command_runner.go:130] > # separated by comma.
	I0130 21:35:10.390285  664102 command_runner.go:130] > # gid_mappings = ""
	I0130 21:35:10.390291  664102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0130 21:35:10.390299  664102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 21:35:10.390306  664102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 21:35:10.390313  664102 command_runner.go:130] > # minimum_mappable_uid = -1
	I0130 21:35:10.390319  664102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0130 21:35:10.390327  664102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 21:35:10.390334  664102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 21:35:10.390340  664102 command_runner.go:130] > # minimum_mappable_gid = -1
	I0130 21:35:10.390346  664102 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0130 21:35:10.390354  664102 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0130 21:35:10.390362  664102 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0130 21:35:10.390369  664102 command_runner.go:130] > # ctr_stop_timeout = 30
	I0130 21:35:10.390374  664102 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0130 21:35:10.390385  664102 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0130 21:35:10.390390  664102 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0130 21:35:10.390398  664102 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0130 21:35:10.390402  664102 command_runner.go:130] > drop_infra_ctr = false
	I0130 21:35:10.390411  664102 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0130 21:35:10.390416  664102 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0130 21:35:10.390423  664102 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0130 21:35:10.390429  664102 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0130 21:35:10.390435  664102 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0130 21:35:10.390442  664102 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0130 21:35:10.390447  664102 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0130 21:35:10.390454  664102 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0130 21:35:10.390461  664102 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0130 21:35:10.390467  664102 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0130 21:35:10.390478  664102 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0130 21:35:10.390489  664102 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0130 21:35:10.390500  664102 command_runner.go:130] > # default_runtime = "runc"
	I0130 21:35:10.390505  664102 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0130 21:35:10.390513  664102 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0130 21:35:10.390524  664102 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0130 21:35:10.390532  664102 command_runner.go:130] > # creation as a file is not desired either.
	I0130 21:35:10.390540  664102 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0130 21:35:10.390547  664102 command_runner.go:130] > # the hostname is being managed dynamically.
	I0130 21:35:10.390552  664102 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0130 21:35:10.390556  664102 command_runner.go:130] > # ]
	I0130 21:35:10.390562  664102 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0130 21:35:10.390571  664102 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0130 21:35:10.390583  664102 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0130 21:35:10.390591  664102 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0130 21:35:10.390595  664102 command_runner.go:130] > #
	I0130 21:35:10.390600  664102 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0130 21:35:10.390607  664102 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0130 21:35:10.390611  664102 command_runner.go:130] > #  runtime_type = "oci"
	I0130 21:35:10.390621  664102 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0130 21:35:10.390626  664102 command_runner.go:130] > #  privileged_without_host_devices = false
	I0130 21:35:10.390631  664102 command_runner.go:130] > #  allowed_annotations = []
	I0130 21:35:10.390635  664102 command_runner.go:130] > # Where:
	I0130 21:35:10.390643  664102 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0130 21:35:10.390649  664102 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0130 21:35:10.390658  664102 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0130 21:35:10.390664  664102 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0130 21:35:10.390670  664102 command_runner.go:130] > #   in $PATH.
	I0130 21:35:10.390676  664102 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0130 21:35:10.390683  664102 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0130 21:35:10.390689  664102 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0130 21:35:10.390694  664102 command_runner.go:130] > #   state.
	I0130 21:35:10.390704  664102 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0130 21:35:10.390717  664102 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0130 21:35:10.390730  664102 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0130 21:35:10.390743  664102 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0130 21:35:10.390755  664102 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0130 21:35:10.390775  664102 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0130 21:35:10.390785  664102 command_runner.go:130] > #   The currently recognized values are:
	I0130 21:35:10.390795  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0130 21:35:10.390808  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0130 21:35:10.390817  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0130 21:35:10.390823  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0130 21:35:10.390832  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0130 21:35:10.390841  664102 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0130 21:35:10.390847  664102 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0130 21:35:10.390855  664102 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0130 21:35:10.390862  664102 command_runner.go:130] > #   should be moved to the container's cgroup
	I0130 21:35:10.390867  664102 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0130 21:35:10.390873  664102 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0130 21:35:10.390878  664102 command_runner.go:130] > runtime_type = "oci"
	I0130 21:35:10.390883  664102 command_runner.go:130] > runtime_root = "/run/runc"
	I0130 21:35:10.390886  664102 command_runner.go:130] > runtime_config_path = ""
	I0130 21:35:10.390891  664102 command_runner.go:130] > monitor_path = ""
	I0130 21:35:10.390895  664102 command_runner.go:130] > monitor_cgroup = ""
	I0130 21:35:10.390903  664102 command_runner.go:130] > monitor_exec_cgroup = ""
	I0130 21:35:10.390912  664102 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0130 21:35:10.390919  664102 command_runner.go:130] > # running containers
	I0130 21:35:10.390923  664102 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0130 21:35:10.390931  664102 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0130 21:35:10.390978  664102 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0130 21:35:10.390991  664102 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0130 21:35:10.390996  664102 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0130 21:35:10.391001  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0130 21:35:10.391005  664102 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0130 21:35:10.391012  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0130 21:35:10.391038  664102 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0130 21:35:10.391050  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0130 21:35:10.391056  664102 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0130 21:35:10.391064  664102 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0130 21:35:10.391070  664102 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0130 21:35:10.391080  664102 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0130 21:35:10.391087  664102 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0130 21:35:10.391100  664102 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0130 21:35:10.391111  664102 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0130 21:35:10.391121  664102 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0130 21:35:10.391127  664102 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0130 21:35:10.391134  664102 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0130 21:35:10.391140  664102 command_runner.go:130] > # Example:
	I0130 21:35:10.391145  664102 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0130 21:35:10.391152  664102 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0130 21:35:10.391157  664102 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0130 21:35:10.391164  664102 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0130 21:35:10.391168  664102 command_runner.go:130] > # cpuset = 0
	I0130 21:35:10.391174  664102 command_runner.go:130] > # cpushares = "0-1"
	I0130 21:35:10.391178  664102 command_runner.go:130] > # Where:
	I0130 21:35:10.391185  664102 command_runner.go:130] > # The workload name is workload-type.
	I0130 21:35:10.391192  664102 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0130 21:35:10.391200  664102 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0130 21:35:10.391205  664102 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0130 21:35:10.391215  664102 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0130 21:35:10.391227  664102 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0130 21:35:10.391230  664102 command_runner.go:130] > # 
	I0130 21:35:10.391239  664102 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0130 21:35:10.391242  664102 command_runner.go:130] > #
	I0130 21:35:10.391251  664102 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0130 21:35:10.391257  664102 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0130 21:35:10.391265  664102 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0130 21:35:10.391271  664102 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0130 21:35:10.391279  664102 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0130 21:35:10.391283  664102 command_runner.go:130] > [crio.image]
	I0130 21:35:10.391291  664102 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0130 21:35:10.391296  664102 command_runner.go:130] > # default_transport = "docker://"
	I0130 21:35:10.391304  664102 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0130 21:35:10.391311  664102 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0130 21:35:10.391317  664102 command_runner.go:130] > # global_auth_file = ""
	I0130 21:35:10.391322  664102 command_runner.go:130] > # The image used to instantiate infra containers.
	I0130 21:35:10.391327  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:35:10.391334  664102 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0130 21:35:10.391346  664102 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0130 21:35:10.391354  664102 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0130 21:35:10.391359  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:35:10.391364  664102 command_runner.go:130] > # pause_image_auth_file = ""
	I0130 21:35:10.391370  664102 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0130 21:35:10.391378  664102 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0130 21:35:10.391384  664102 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0130 21:35:10.391392  664102 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0130 21:35:10.391397  664102 command_runner.go:130] > # pause_command = "/pause"
	I0130 21:35:10.391405  664102 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0130 21:35:10.391411  664102 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0130 21:35:10.391417  664102 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0130 21:35:10.391422  664102 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0130 21:35:10.391431  664102 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0130 21:35:10.391435  664102 command_runner.go:130] > # signature_policy = ""
	I0130 21:35:10.391440  664102 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0130 21:35:10.391446  664102 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0130 21:35:10.391450  664102 command_runner.go:130] > # changing them here.
	I0130 21:35:10.391457  664102 command_runner.go:130] > # insecure_registries = [
	I0130 21:35:10.391460  664102 command_runner.go:130] > # ]
	I0130 21:35:10.391467  664102 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0130 21:35:10.391472  664102 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0130 21:35:10.391476  664102 command_runner.go:130] > # image_volumes = "mkdir"
	I0130 21:35:10.391480  664102 command_runner.go:130] > # Temporary directory to use for storing big files
	I0130 21:35:10.391484  664102 command_runner.go:130] > # big_files_temporary_dir = ""
	I0130 21:35:10.391490  664102 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0130 21:35:10.391498  664102 command_runner.go:130] > # CNI plugins.
	I0130 21:35:10.391504  664102 command_runner.go:130] > [crio.network]
	I0130 21:35:10.391510  664102 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0130 21:35:10.391518  664102 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0130 21:35:10.391522  664102 command_runner.go:130] > # cni_default_network = ""
	I0130 21:35:10.391528  664102 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0130 21:35:10.391536  664102 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0130 21:35:10.391541  664102 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0130 21:35:10.391547  664102 command_runner.go:130] > # plugin_dirs = [
	I0130 21:35:10.391551  664102 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0130 21:35:10.391557  664102 command_runner.go:130] > # ]
	I0130 21:35:10.391565  664102 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0130 21:35:10.391571  664102 command_runner.go:130] > [crio.metrics]
	I0130 21:35:10.391576  664102 command_runner.go:130] > # Globally enable or disable metrics support.
	I0130 21:35:10.391582  664102 command_runner.go:130] > enable_metrics = true
	I0130 21:35:10.391587  664102 command_runner.go:130] > # Specify enabled metrics collectors.
	I0130 21:35:10.391593  664102 command_runner.go:130] > # Per default all metrics are enabled.
	I0130 21:35:10.391600  664102 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0130 21:35:10.391608  664102 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0130 21:35:10.391614  664102 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0130 21:35:10.391620  664102 command_runner.go:130] > # metrics_collectors = [
	I0130 21:35:10.391624  664102 command_runner.go:130] > # 	"operations",
	I0130 21:35:10.391629  664102 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0130 21:35:10.391634  664102 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0130 21:35:10.391640  664102 command_runner.go:130] > # 	"operations_errors",
	I0130 21:35:10.391647  664102 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0130 21:35:10.391653  664102 command_runner.go:130] > # 	"image_pulls_by_name",
	I0130 21:35:10.391661  664102 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0130 21:35:10.391668  664102 command_runner.go:130] > # 	"image_pulls_failures",
	I0130 21:35:10.391674  664102 command_runner.go:130] > # 	"image_pulls_successes",
	I0130 21:35:10.391679  664102 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0130 21:35:10.391683  664102 command_runner.go:130] > # 	"image_layer_reuse",
	I0130 21:35:10.391688  664102 command_runner.go:130] > # 	"containers_oom_total",
	I0130 21:35:10.391692  664102 command_runner.go:130] > # 	"containers_oom",
	I0130 21:35:10.391697  664102 command_runner.go:130] > # 	"processes_defunct",
	I0130 21:35:10.391702  664102 command_runner.go:130] > # 	"operations_total",
	I0130 21:35:10.391712  664102 command_runner.go:130] > # 	"operations_latency_seconds",
	I0130 21:35:10.391721  664102 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0130 21:35:10.391729  664102 command_runner.go:130] > # 	"operations_errors_total",
	I0130 21:35:10.391740  664102 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0130 21:35:10.391747  664102 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0130 21:35:10.391757  664102 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0130 21:35:10.391764  664102 command_runner.go:130] > # 	"image_pulls_success_total",
	I0130 21:35:10.391774  664102 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0130 21:35:10.391783  664102 command_runner.go:130] > # 	"containers_oom_count_total",
	I0130 21:35:10.391789  664102 command_runner.go:130] > # ]
	I0130 21:35:10.391802  664102 command_runner.go:130] > # The port on which the metrics server will listen.
	I0130 21:35:10.391812  664102 command_runner.go:130] > # metrics_port = 9090
	I0130 21:35:10.391820  664102 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0130 21:35:10.391829  664102 command_runner.go:130] > # metrics_socket = ""
	I0130 21:35:10.391835  664102 command_runner.go:130] > # The certificate for the secure metrics server.
	I0130 21:35:10.391843  664102 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0130 21:35:10.391849  664102 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0130 21:35:10.391854  664102 command_runner.go:130] > # certificate on any modification event.
	I0130 21:35:10.391861  664102 command_runner.go:130] > # metrics_cert = ""
	I0130 21:35:10.391866  664102 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0130 21:35:10.391873  664102 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0130 21:35:10.391880  664102 command_runner.go:130] > # metrics_key = ""
	I0130 21:35:10.391888  664102 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0130 21:35:10.391892  664102 command_runner.go:130] > [crio.tracing]
	I0130 21:35:10.391900  664102 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0130 21:35:10.391904  664102 command_runner.go:130] > # enable_tracing = false
	I0130 21:35:10.391912  664102 command_runner.go:130] > # Address on which the gRPC trace collector listens.
	I0130 21:35:10.391917  664102 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0130 21:35:10.391925  664102 command_runner.go:130] > # Number of samples to collect per million spans.
	I0130 21:35:10.391930  664102 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0130 21:35:10.391938  664102 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0130 21:35:10.391942  664102 command_runner.go:130] > [crio.stats]
	I0130 21:35:10.391950  664102 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0130 21:35:10.391956  664102 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0130 21:35:10.391962  664102 command_runner.go:130] > # stats_collection_period = 0
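	The CRI-O configuration dumped above reflects the node's effective settings. A minimal sketch of how a comparable view could be reproduced on the node, and of registering an extra runtime handler through a drop-in instead of editing the main file; the crun path and the drop-in file name are illustrative assumptions, not part of this run:
	  # Print the configuration CRI-O would use after merging defaults and drop-ins
	  sudo crio config | less
	  # Hypothetical drop-in registering a crun handler (path and values are examples only)
	  sudo tee /etc/crio/crio.conf.d/10-crun.conf <<'EOF'
	  [crio.runtime.runtimes.crun]
	  runtime_path = "/usr/bin/crun"
	  runtime_type = "oci"
	  runtime_root = "/run/crun"
	  EOF
	  # Restart CRI-O so the new handler is picked up
	  sudo systemctl restart crio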
	I0130 21:35:10.392047  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:35:10.392061  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:35:10.392081  664102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 21:35:10.392102  664102 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.174 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-721181 NodeName:multinode-721181 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.174 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 21:35:10.392276  664102 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.174
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-721181"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.174
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 21:35:10.392348  664102 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-721181 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.174
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
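	The kubelet ExecStart override above is installed as a systemd drop-in in the next few steps; a minimal sketch of verifying what systemd ends up loading, assuming a shell inside the minikube VM:
	  # Show the kubelet unit together with every drop-in overriding it
	  systemctl cat kubelet
	  # Re-read unit files and restart the kubelet after changing a drop-in
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet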
	I0130 21:35:10.392400  664102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 21:35:10.400954  664102 command_runner.go:130] > kubeadm
	I0130 21:35:10.400969  664102 command_runner.go:130] > kubectl
	I0130 21:35:10.400973  664102 command_runner.go:130] > kubelet
	I0130 21:35:10.401208  664102 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 21:35:10.401285  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 21:35:10.409364  664102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0130 21:35:10.424534  664102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 21:35:10.439417  664102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
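	The rendered kubeadm config now sits at /var/tmp/minikube/kubeadm.yaml.new; a minimal sketch of a manual sanity check, assuming the staged kubeadm binary and a kubeadm release recent enough to ship the validate subcommand:
	  # Ask kubeadm to parse and validate the rendered configuration
	  sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new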
	I0130 21:35:10.454728  664102 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0130 21:35:10.458132  664102 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.174	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 21:35:10.469653  664102 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181 for IP: 192.168.39.174
	I0130 21:35:10.469681  664102 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:35:10.469858  664102 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 21:35:10.469924  664102 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 21:35:10.470067  664102 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key
	I0130 21:35:10.470151  664102 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/apiserver.key.4baccf75
	I0130 21:35:10.470239  664102 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/proxy-client.key
	I0130 21:35:10.470264  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0130 21:35:10.470293  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0130 21:35:10.470314  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0130 21:35:10.470334  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0130 21:35:10.470354  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 21:35:10.470383  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 21:35:10.470415  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 21:35:10.470439  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 21:35:10.470521  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 21:35:10.470569  664102 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 21:35:10.470591  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 21:35:10.470627  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 21:35:10.470666  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 21:35:10.470712  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 21:35:10.470777  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:35:10.470832  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /usr/share/ca-certificates/6477182.pem
	I0130 21:35:10.470856  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:35:10.470874  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem -> /usr/share/ca-certificates/647718.pem
	I0130 21:35:10.471624  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 21:35:10.494115  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 21:35:10.516325  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 21:35:10.538057  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 21:35:10.560020  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 21:35:10.582009  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 21:35:10.604212  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 21:35:10.626104  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 21:35:10.648416  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 21:35:10.670462  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 21:35:10.692172  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 21:35:10.714764  664102 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 21:35:10.730173  664102 ssh_runner.go:195] Run: openssl version
	I0130 21:35:10.735221  664102 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0130 21:35:10.735423  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 21:35:10.744423  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 21:35:10.748478  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:35:10.748663  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:35:10.748709  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 21:35:10.753714  664102 command_runner.go:130] > 3ec20f2e
	I0130 21:35:10.754084  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 21:35:10.763263  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 21:35:10.772154  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:35:10.776319  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:35:10.776428  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:35:10.776480  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:35:10.781568  664102 command_runner.go:130] > b5213941
	I0130 21:35:10.781630  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 21:35:10.790773  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 21:35:10.800483  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 21:35:10.804579  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:35:10.804710  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:35:10.804757  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 21:35:10.810027  664102 command_runner.go:130] > 51391683
	I0130 21:35:10.810082  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
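	Each of the three CA bundles above is exposed to TLS clients through a symlink named after its OpenSSL subject hash; a minimal sketch of the same linking done by hand for the minikubeCA bundle:
	  # Compute the subject hash OpenSSL uses as the symlink name (b5213941 in this run)
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  # Create the hash-named link that the system trust store looks up
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"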
	I0130 21:35:10.819853  664102 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 21:35:10.824196  664102 command_runner.go:130] > ca.crt
	I0130 21:35:10.824214  664102 command_runner.go:130] > ca.key
	I0130 21:35:10.824222  664102 command_runner.go:130] > healthcheck-client.crt
	I0130 21:35:10.824234  664102 command_runner.go:130] > healthcheck-client.key
	I0130 21:35:10.824241  664102 command_runner.go:130] > peer.crt
	I0130 21:35:10.824247  664102 command_runner.go:130] > peer.key
	I0130 21:35:10.824253  664102 command_runner.go:130] > server.crt
	I0130 21:35:10.824258  664102 command_runner.go:130] > server.key
	I0130 21:35:10.824362  664102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 21:35:10.830281  664102 command_runner.go:130] > Certificate will not expire
	I0130 21:35:10.830490  664102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 21:35:10.835745  664102 command_runner.go:130] > Certificate will not expire
	I0130 21:35:10.836145  664102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 21:35:10.841571  664102 command_runner.go:130] > Certificate will not expire
	I0130 21:35:10.841634  664102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 21:35:10.846885  664102 command_runner.go:130] > Certificate will not expire
	I0130 21:35:10.846942  664102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 21:35:10.852274  664102 command_runner.go:130] > Certificate will not expire
	I0130 21:35:10.852314  664102 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 21:35:10.857329  664102 command_runner.go:130] > Certificate will not expire
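	The -checkend 86400 probes above succeed only when a certificate remains valid for at least another 24 hours; a minimal sketch with an explicit verdict, reusing one of the cert paths listed above:
	  # Exit status 0 means the certificate is still valid 24h from now
	  if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt; then
	    echo "certificate will not expire within 24h"
	  else
	    echo "certificate expires within 24h and needs renewal"
	  fi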
	I0130 21:35:10.857631  664102 kubeadm.go:404] StartCluster: {Name:multinode-721181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fals
e ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Di
sableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:35:10.857735  664102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 21:35:10.857781  664102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 21:35:10.894333  664102 cri.go:89] found id: ""
	I0130 21:35:10.894422  664102 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 21:35:10.903498  664102 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0130 21:35:10.903527  664102 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0130 21:35:10.903538  664102 command_runner.go:130] > /var/lib/minikube/etcd:
	I0130 21:35:10.903544  664102 command_runner.go:130] > member
	I0130 21:35:10.903602  664102 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 21:35:10.903638  664102 kubeadm.go:636] restartCluster start
	I0130 21:35:10.903693  664102 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 21:35:10.912090  664102 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:10.912717  664102 kubeconfig.go:92] found "multinode-721181" server: "https://192.168.39.174:8443"
	I0130 21:35:10.913208  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:35:10.913529  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:35:10.914359  664102 cert_rotation.go:137] Starting client certificate rotation controller
	I0130 21:35:10.914735  664102 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 21:35:10.922446  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:10.922495  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:10.933571  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:11.423317  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:11.423492  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:11.435936  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:11.922491  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:11.922580  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:11.933131  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:12.422663  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:12.422774  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:12.433670  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:12.923214  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:12.923353  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:12.934058  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:13.423245  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:13.423335  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:13.435347  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:13.922795  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:13.922898  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:13.933443  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:14.422978  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:14.423078  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:14.433838  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:14.923485  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:14.923569  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:14.934182  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:15.423086  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:15.423180  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:15.434123  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:15.922667  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:15.922792  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:15.933651  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:16.423286  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:16.423443  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:16.434799  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:16.923374  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:16.923495  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:16.934191  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:17.422661  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:17.422745  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:17.433183  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:17.922736  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:17.922840  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:17.933561  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:18.422528  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:18.422622  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:18.433637  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:18.923268  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:18.923365  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:18.934130  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:19.422652  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:19.422742  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:19.434406  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:19.922818  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:19.922913  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:19.935421  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:20.423100  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:20.423196  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:20.434593  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 21:35:20.923422  664102 api_server.go:166] Checking apiserver status ...
	I0130 21:35:20.923510  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 21:35:20.933975  664102 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
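
The retries above follow a fixed cadence: `sudo pgrep -xnf kube-apiserver.*minikube.*` is re-run roughly every 500 ms until the surrounding context deadline expires, at which point minikube concludes the apiserver never came back and moves on to reconfiguring the node. A minimal local sketch of that wait-for-process pattern, assuming a hypothetical waitForProcess helper and plain os/exec in place of minikube's ssh_runner:

// poll_apiserver.go: re-run pgrep on a fixed cadence until it finds a match or
// the context deadline expires (hypothetical helper, local exec instead of SSH).
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` every interval until it succeeds
// or ctx is done, mirroring the ~500ms retry cadence visible in the log above.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver error: %w", ctx.Err()) // e.g. context deadline exceeded
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println("needs reconfigure:", err)
	}
}
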
	I0130 21:35:20.934001  664102 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 21:35:20.934029  664102 kubeadm.go:1135] stopping kube-system containers ...
	I0130 21:35:20.934052  664102 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 21:35:20.934113  664102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 21:35:20.971013  664102 cri.go:89] found id: ""
	I0130 21:35:20.971076  664102 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 21:35:20.985095  664102 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 21:35:20.992814  664102 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0130 21:35:20.992835  664102 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0130 21:35:20.992843  664102 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0130 21:35:20.992850  664102 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 21:35:20.992882  664102 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 21:35:20.992921  664102 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 21:35:21.000852  664102 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 21:35:21.000872  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:35:21.103504  664102 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 21:35:21.103873  664102 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0130 21:35:21.104368  664102 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0130 21:35:21.104817  664102 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 21:35:21.105472  664102 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0130 21:35:21.105949  664102 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0130 21:35:21.106644  664102 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0130 21:35:21.107135  664102 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0130 21:35:21.107546  664102 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0130 21:35:21.108014  664102 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 21:35:21.108454  664102 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 21:35:21.109049  664102 command_runner.go:130] > [certs] Using the existing "sa" key
	I0130 21:35:21.110355  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:35:21.159456  664102 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 21:35:21.288921  664102 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 21:35:21.451777  664102 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 21:35:21.858788  664102 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 21:35:21.956550  664102 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 21:35:21.959196  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:35:22.139753  664102 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 21:35:22.139782  664102 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 21:35:22.139788  664102 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0130 21:35:22.139820  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:35:22.211380  664102 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 21:35:22.211411  664102 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 21:35:22.216281  664102 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 21:35:22.218382  664102 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 21:35:22.221460  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:35:22.281510  664102 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
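
The five commands above re-run individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml. A compressed sketch of that sequence, assuming the same binary path and config location shown in the log and running the commands locally rather than over SSH:

// kubeadm_phases.go: run the same `kubeadm init phase` sequence the log shows,
// in order, against a fixed --config. Paths/PATH prefix are taken from the log.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, phase)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			log.Fatalf("phase %q failed: %v", phase, err)
		}
	}
}
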
	I0130 21:35:22.284961  664102 api_server.go:52] waiting for apiserver process to appear ...
	I0130 21:35:22.285061  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:22.785841  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:23.285527  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:23.785156  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:24.285509  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:24.785211  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:24.811703  664102 command_runner.go:130] > 1099
	I0130 21:35:24.811964  664102 api_server.go:72] duration metric: took 2.527004512s to wait for apiserver process to appear ...
	I0130 21:35:24.811991  664102 api_server.go:88] waiting for apiserver healthz status ...
	I0130 21:35:24.812015  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:28.576675  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 21:35:28.576707  664102 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 21:35:28.576722  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:28.628333  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 21:35:28.628361  664102 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 21:35:28.812665  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:28.818320  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 21:35:28.818353  664102 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 21:35:29.313067  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:29.317990  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 21:35:29.318026  664102 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 21:35:29.812316  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:29.827087  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 21:35:29.827113  664102 api_server.go:103] status: https://192.168.39.174:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 21:35:30.312889  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:30.318059  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0130 21:35:30.318236  664102 round_trippers.go:463] GET https://192.168.39.174:8443/version
	I0130 21:35:30.318248  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:30.318257  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:30.318268  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:30.327217  664102 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0130 21:35:30.327248  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:30.327259  664102 round_trippers.go:580]     Audit-Id: b3de989d-8a5b-4a05-a8ef-4a9177d684d5
	I0130 21:35:30.327270  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:30.327279  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:30.327288  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:30.327298  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:30.327307  664102 round_trippers.go:580]     Content-Length: 264
	I0130 21:35:30.327323  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:30 GMT
	I0130 21:35:30.327381  664102 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0130 21:35:30.327477  664102 api_server.go:141] control plane version: v1.28.4
	I0130 21:35:30.327502  664102 api_server.go:131] duration metric: took 5.515502396s to wait for apiserver health ...
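
Once the process exists, readiness is decided by polling the apiserver's /healthz endpoint until it returns 200 and then reading /version, which is the 403 (anonymous, before RBAC bootstrap) → 500 (bootstrap hooks pending) → 200 progression visible above. A rough Go equivalent, assuming anonymous HTTPS access and InsecureSkipVerify purely for illustration (the real client authenticates with the cluster's certificates):

// healthz_wait.go: poll /healthz until 200, then read /version.
// Endpoint, cadence, and the skip-verify TLS config are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	base := "https://192.168.39.174:8443"

	for {
		resp, err := client.Get(base + "/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK { // body is "ok"
				break
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence in the log
	}

	if resp, err := client.Get(base + "/version"); err == nil {
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Println(string(body)) // e.g. {"major":"1","minor":"28","gitVersion":"v1.28.4", ...}
	}
}
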
	I0130 21:35:30.327516  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:35:30.327528  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:35:30.329276  664102 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0130 21:35:30.330968  664102 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 21:35:30.344248  664102 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0130 21:35:30.344275  664102 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0130 21:35:30.344285  664102 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0130 21:35:30.344310  664102 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 21:35:30.344343  664102 command_runner.go:130] > Access: 2024-01-30 21:34:55.719579323 +0000
	I0130 21:35:30.344355  664102 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0130 21:35:30.344363  664102 command_runner.go:130] > Change: 2024-01-30 21:34:53.860579323 +0000
	I0130 21:35:30.344372  664102 command_runner.go:130] >  Birth: -
	I0130 21:35:30.344442  664102 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 21:35:30.344458  664102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 21:35:30.386986  664102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 21:35:31.543606  664102 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0130 21:35:31.543629  664102 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0130 21:35:31.543635  664102 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0130 21:35:31.543640  664102 command_runner.go:130] > daemonset.apps/kindnet configured
	I0130 21:35:31.543859  664102 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.156840114s)
	I0130 21:35:31.543890  664102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:35:31.543994  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:31.544016  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.544028  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.544042  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.547823  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:31.547844  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.547855  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.547863  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.547872  664102 round_trippers.go:580]     Audit-Id: d7a541bb-3e48-4322-af57-719e155d8582
	I0130 21:35:31.547881  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.547890  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.547906  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.549595  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"812"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82069 chars]
	I0130 21:35:31.553678  664102 system_pods.go:59] 12 kube-system pods found
	I0130 21:35:31.553731  664102 system_pods.go:61] "coredns-5dd5756b68-2jstl" [9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 21:35:31.553744  664102 system_pods.go:61] "etcd-multinode-721181" [83f20d3f-5604-4e3c-a7c8-b38a9b20c035] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 21:35:31.553761  664102 system_pods.go:61] "kindnet-8thzp" [c1610c3b-8a9d-47d4-a204-75648b6b61ab] Running
	I0130 21:35:31.553768  664102 system_pods.go:61] "kindnet-qxwqk" [a733f539-7a0f-46d9-b868-9b0d80001474] Running
	I0130 21:35:31.553778  664102 system_pods.go:61] "kindnet-zt7wg" [49dc74c8-c0dc-4421-99f2-b40bcf3429ff] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 21:35:31.553788  664102 system_pods.go:61] "kube-apiserver-multinode-721181" [fbcc53e1-4691-4473-b215-2cb6daeaf321] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 21:35:31.553801  664102 system_pods.go:61] "kube-controller-manager-multinode-721181" [de8beec4-5cad-4405-b856-7475b95559ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 21:35:31.553810  664102 system_pods.go:61] "kube-proxy-49rq4" [63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3] Running
	I0130 21:35:31.553821  664102 system_pods.go:61] "kube-proxy-lwg96" [68cc319c-45c4-4a65-9712-d4e419acd7d6] Running
	I0130 21:35:31.553828  664102 system_pods.go:61] "kube-proxy-s9pwd" [e6594579-7b2f-4ab5-b7f2-0b176bad1705] Running
	I0130 21:35:31.553837  664102 system_pods.go:61] "kube-scheduler-multinode-721181" [d7e4675b-0e8c-46de-9b39-435d25004a88] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 21:35:31.553848  664102 system_pods.go:61] "storage-provisioner" [5f9b77ce-6169-4580-ae1c-04759bfcf2d7] Running
	I0130 21:35:31.553858  664102 system_pods.go:74] duration metric: took 9.959368ms to wait for pod list to return data ...
	I0130 21:35:31.553870  664102 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:35:31.553929  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0130 21:35:31.553944  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.553955  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.553966  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.556651  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:31.556667  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.556674  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.556679  664102 round_trippers.go:580]     Audit-Id: 74a4baa7-6d1a-4d2e-bbf0-7e7ae023934b
	I0130 21:35:31.556684  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.556688  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.556693  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.556698  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.557100  664102 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"812"},"items":[{"metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16355 chars]
	I0130 21:35:31.558292  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:35:31.558360  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:35:31.558375  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:35:31.558383  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:35:31.558392  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:35:31.558398  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:35:31.558415  664102 node_conditions.go:105] duration metric: took 4.53475ms to run NodePressure ...
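
The two GETs above (the kube-system pod list and the node list) are what the "waiting for kube-system pods" and NodePressure checks boil down to. A short client-go sketch of the same queries, assuming the kubeconfig written to /var/lib/minikube/kubeconfig:

// system_pods.go: list kube-system pods and inspect node capacity/conditions,
// roughly what the GETs to /api/v1/namespaces/kube-system/pods and /api/v1/nodes do.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		capacity := n.Status.Capacity
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, capacity.Cpu().String(), capacity.StorageEphemeral().String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Printf("  Ready=%s\n", c.Status)
			}
		}
	}
}
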
	I0130 21:35:31.558434  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 21:35:31.782507  664102 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0130 21:35:31.782541  664102 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0130 21:35:31.782567  664102 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 21:35:31.782672  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0130 21:35:31.782685  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.782697  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.782710  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.785810  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:31.785832  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.785843  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.785860  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.785868  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.785877  664102 round_trippers.go:580]     Audit-Id: 6e5654ab-b8bf-4896-a09a-542b8f8c4a37
	I0130 21:35:31.785886  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.785896  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.786952  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"814"},"items":[{"metadata":{"name":"etcd-multinode-721181","namespace":"kube-system","uid":"83f20d3f-5604-4e3c-a7c8-b38a9b20c035","resourceVersion":"769","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.mirror":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.seen":"2024-01-30T21:24:57.236042745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0130 21:35:31.787946  664102 kubeadm.go:787] kubelet initialised
	I0130 21:35:31.787966  664102 kubeadm.go:788] duration metric: took 5.391591ms waiting for restarted kubelet to initialise ...
	I0130 21:35:31.787974  664102 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:35:31.788033  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:31.788041  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.788049  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.788071  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.793205  664102 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0130 21:35:31.793223  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.793231  664102 round_trippers.go:580]     Audit-Id: bfb44b02-0954-4727-8098-f556a9fda195
	I0130 21:35:31.793239  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.793248  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.793256  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.793265  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.793272  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.794588  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"814"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82069 chars]
	I0130 21:35:31.797292  664102 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:31.797380  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:31.797388  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.797396  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.797401  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.799521  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:31.799541  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.799547  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.799553  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.799558  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.799563  664102 round_trippers.go:580]     Audit-Id: ebf78ba3-1db6-461e-b894-f5b80f82875b
	I0130 21:35:31.799568  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.799572  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.799742  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:31.800265  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:31.800284  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.800295  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.800304  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.802142  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:31.802158  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.802164  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.802169  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.802180  664102 round_trippers.go:580]     Audit-Id: 171c10e1-18ca-409c-8d03-6c047df772ad
	I0130 21:35:31.802185  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.802193  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.802207  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.802377  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:31.802711  664102 pod_ready.go:97] node "multinode-721181" hosting pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.802733  664102 pod_ready.go:81] duration metric: took 5.416811ms waiting for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	E0130 21:35:31.802745  664102 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-721181" hosting pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
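
Each per-pod wait above checks the pod's Ready condition and short-circuits when the hosting node itself is not Ready, which is why coredns-5dd5756b68-2jstl is skipped here. A simplified client-go sketch of that behaviour, with the pod name and kubeconfig path taken from the log as assumptions:

// pod_ready_sketch.go: wait for a pod's Ready condition, skipping early if the
// hosting node is not Ready, similar to the WaitExtra behaviour logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()

	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-2jstl", metav1.GetOptions{})
		if err == nil {
			node, nerr := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
			if nerr == nil && !nodeReady(node) {
				fmt.Printf("node %q not Ready, skipping wait for %q\n", node.Name, pod.Name)
				return
			}
			if podReady(pod) {
				fmt.Printf("pod %q is Ready\n", pod.Name)
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out:", ctx.Err())
			return
		case <-time.After(2 * time.Second):
		}
	}
}
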
	I0130 21:35:31.802759  664102 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:31.802824  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-721181
	I0130 21:35:31.802835  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.802846  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.802859  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.804457  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:31.804475  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.804484  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.804492  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.804499  664102 round_trippers.go:580]     Audit-Id: b1ea3e83-b827-4607-a6d7-52945a3e8b51
	I0130 21:35:31.804506  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.804518  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.804527  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.804642  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-721181","namespace":"kube-system","uid":"83f20d3f-5604-4e3c-a7c8-b38a9b20c035","resourceVersion":"769","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.mirror":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.seen":"2024-01-30T21:24:57.236042745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0130 21:35:31.804980  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:31.804993  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.804999  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.805009  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.806583  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:31.806602  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.806608  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.806613  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.806618  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.806623  664102 round_trippers.go:580]     Audit-Id: d5d249c5-479e-4c41-b807-655443fcf326
	I0130 21:35:31.806630  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.806638  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.806804  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:31.807067  664102 pod_ready.go:97] node "multinode-721181" hosting pod "etcd-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.807083  664102 pod_ready.go:81] duration metric: took 4.311451ms waiting for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	E0130 21:35:31.807091  664102 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-721181" hosting pod "etcd-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.807106  664102 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:31.807152  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-721181
	I0130 21:35:31.807160  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.807166  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.807172  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.809056  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:31.809073  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.809083  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.809091  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.809099  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.809107  664102 round_trippers.go:580]     Audit-Id: e5cc2405-07b0-49b0-a657-2d737890df9f
	I0130 21:35:31.809112  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.809120  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.809421  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-721181","namespace":"kube-system","uid":"fbcc53e1-4691-4473-b215-2cb6daeaf321","resourceVersion":"762","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.mirror":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.seen":"2024-01-30T21:24:57.236043778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0130 21:35:31.809768  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:31.809780  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.809787  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.809792  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.811532  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:31.811545  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.811556  664102 round_trippers.go:580]     Audit-Id: f8ff995d-87ec-46d0-8d4e-a7bc015f672e
	I0130 21:35:31.811561  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.811566  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.811571  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.811576  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.811581  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.811703  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:31.811989  664102 pod_ready.go:97] node "multinode-721181" hosting pod "kube-apiserver-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.812009  664102 pod_ready.go:81] duration metric: took 4.892281ms waiting for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	E0130 21:35:31.812017  664102 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-721181" hosting pod "kube-apiserver-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.812025  664102 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:31.812072  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-721181
	I0130 21:35:31.812080  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.812086  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.812091  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.813758  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:31.813775  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.813781  664102 round_trippers.go:580]     Audit-Id: e20accaa-0589-4484-a44a-aed5596a6bdd
	I0130 21:35:31.813787  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.813792  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.813796  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.813801  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.813809  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.814132  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-721181","namespace":"kube-system","uid":"de8beec4-5cad-4405-b856-7475b95559ba","resourceVersion":"759","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.mirror":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.seen":"2024-01-30T21:24:57.236037857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0130 21:35:31.944865  664102 request.go:629] Waited for 130.362032ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:31.944946  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:31.944969  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:31.944982  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:31.944992  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:31.948421  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:31.948442  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:31.948449  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:31.948454  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:31.948459  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:31.948465  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:31 GMT
	I0130 21:35:31.948470  664102 round_trippers.go:580]     Audit-Id: 78f476ad-a2dd-4178-92aa-c4e8dee9c2bc
	I0130 21:35:31.948475  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:31.948824  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:31.949193  664102 pod_ready.go:97] node "multinode-721181" hosting pod "kube-controller-manager-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.949214  664102 pod_ready.go:81] duration metric: took 137.182208ms waiting for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	E0130 21:35:31.949226  664102 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-721181" hosting pod "kube-controller-manager-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:31.949241  664102 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:32.144711  664102 request.go:629] Waited for 195.372466ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:35:32.144787  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:35:32.144795  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:32.144804  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:32.144811  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:32.147376  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:32.147394  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:32.147401  664102 round_trippers.go:580]     Audit-Id: faf7ec56-fd19-478b-8bac-37320a88536a
	I0130 21:35:32.147407  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:32.147418  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:32.147430  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:32.147441  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:32.147450  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:32 GMT
	I0130 21:35:32.147616  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-49rq4","generateName":"kube-proxy-","namespace":"kube-system","uid":"63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3","resourceVersion":"812","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 21:35:32.344456  664102 request.go:629] Waited for 196.367415ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:32.344524  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:32.344529  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:32.344537  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:32.344543  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:32.347156  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:32.347177  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:32.347184  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:32.347190  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:32 GMT
	I0130 21:35:32.347195  664102 round_trippers.go:580]     Audit-Id: 2a1068c6-bfc7-42cb-9906-67faa08dcab0
	I0130 21:35:32.347200  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:32.347208  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:32.347219  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:32.347370  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:32.347764  664102 pod_ready.go:97] node "multinode-721181" hosting pod "kube-proxy-49rq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:32.347790  664102 pod_ready.go:81] duration metric: took 398.53932ms waiting for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	E0130 21:35:32.347800  664102 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-721181" hosting pod "kube-proxy-49rq4" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:32.347808  664102 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:32.544663  664102 request.go:629] Waited for 196.769672ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:35:32.544752  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:35:32.544760  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:32.544772  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:32.544785  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:32.547328  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:32.547356  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:32.547366  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:32.547375  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:32 GMT
	I0130 21:35:32.547387  664102 round_trippers.go:580]     Audit-Id: 9078abd3-3582-44da-a14c-ac69d36c28ac
	I0130 21:35:32.547395  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:32.547403  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:32.547420  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:32.547609  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwg96","generateName":"kube-proxy-","namespace":"kube-system","uid":"68cc319c-45c4-4a65-9712-d4e419acd7d6","resourceVersion":"681","creationTimestamp":"2024-01-30T21:26:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0130 21:35:32.744593  664102 request.go:629] Waited for 196.357237ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:35:32.744667  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:35:32.744673  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:32.744681  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:32.744689  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:32.747369  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:32.747393  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:32.747400  664102 round_trippers.go:580]     Audit-Id: 9a45e4c4-3c60-467a-a035-e7e17693f46b
	I0130 21:35:32.747406  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:32.747411  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:32.747416  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:32.747421  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:32.747426  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:32 GMT
	I0130 21:35:32.748006  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m03","uid":"f8b13ad8-e768-466a-b155-3ab55af16d96","resourceVersion":"702","creationTimestamp":"2024-01-30T21:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_27_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0130 21:35:32.748309  664102 pod_ready.go:92] pod "kube-proxy-lwg96" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:32.748328  664102 pod_ready.go:81] duration metric: took 400.511028ms waiting for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:32.748337  664102 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:32.944475  664102 request.go:629] Waited for 196.044738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:35:32.944546  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:35:32.944551  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:32.944559  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:32.944565  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:32.947024  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:32.947044  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:32.947051  664102 round_trippers.go:580]     Audit-Id: c275ad3a-e9d4-4709-9301-7555bcdc6ceb
	I0130 21:35:32.947061  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:32.947066  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:32.947071  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:32.947076  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:32.947084  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:32 GMT
	I0130 21:35:32.947494  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s9pwd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6594579-7b2f-4ab5-b7f2-0b176bad1705","resourceVersion":"479","creationTimestamp":"2024-01-30T21:26:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0130 21:35:33.144078  664102 request.go:629] Waited for 195.969105ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:35:33.144162  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:35:33.144174  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:33.144186  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:33.144201  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:33.146538  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:33.146559  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:33.146569  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:33.146574  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:33.146579  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:33.146587  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:33.146592  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:33 GMT
	I0130 21:35:33.146600  664102 round_trippers.go:580]     Audit-Id: 3eb6a53e-37cb-4847-9980-a5df4f9fe25c
	I0130 21:35:33.146754  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m02","uid":"47058aff-0457-4267-b98b-c3be7d21f2dc","resourceVersion":"708","creationTimestamp":"2024-01-30T21:26:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_27_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0130 21:35:33.147055  664102 pod_ready.go:92] pod "kube-proxy-s9pwd" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:33.147073  664102 pod_ready.go:81] duration metric: took 398.729127ms waiting for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:33.147081  664102 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:33.344058  664102 request.go:629] Waited for 196.88498ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:35:33.344159  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:35:33.344170  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:33.344179  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:33.344188  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:33.346822  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:33.346848  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:33.346858  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:33 GMT
	I0130 21:35:33.346867  664102 round_trippers.go:580]     Audit-Id: 771ae50f-fde2-4bc4-8ec2-39be46430568
	I0130 21:35:33.346877  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:33.346883  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:33.346891  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:33.346896  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:33.347173  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-721181","namespace":"kube-system","uid":"d7e4675b-0e8c-46de-9b39-435d25004a88","resourceVersion":"765","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"48930a2236670664c600a427fcb648de","kubernetes.io/config.mirror":"48930a2236670664c600a427fcb648de","kubernetes.io/config.seen":"2024-01-30T21:24:57.236041601Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0130 21:35:33.544976  664102 request.go:629] Waited for 197.395106ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:33.545074  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:33.545084  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:33.545092  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:33.545099  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:33.547151  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:33.547174  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:33.547182  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:33.547187  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:33 GMT
	I0130 21:35:33.547192  664102 round_trippers.go:580]     Audit-Id: 49462951-865d-4e19-9a0f-445c9b8e621e
	I0130 21:35:33.547201  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:33.547206  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:33.547214  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:33.547468  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:33.547833  664102 pod_ready.go:97] node "multinode-721181" hosting pod "kube-scheduler-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:33.547861  664102 pod_ready.go:81] duration metric: took 400.768766ms waiting for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	E0130 21:35:33.547873  664102 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-721181" hosting pod "kube-scheduler-multinode-721181" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-721181" has status "Ready":"False"
	I0130 21:35:33.547888  664102 pod_ready.go:38] duration metric: took 1.759906089s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
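The pod_ready waits above (and the "(skipping!)" messages whenever the hosting node is NotReady) come down to inspecting the PodReady condition in each pod's status, as returned by the GET requests logged here. A minimal sketch of that check, assuming k8s.io/api/core/v1; this is illustrative, not minikube's own pod_ready.go:

```go
// Sketch of the readiness check implied by the pod_ready.go waits above.
// Assumes a *corev1.Pod already fetched via the logged GET requests.
package main

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's PodReady condition is True,
// i.e. what the log prints as status "Ready":"True".
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}
```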
	I0130 21:35:33.547908  664102 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 21:35:33.567428  664102 command_runner.go:130] > -16
	I0130 21:35:33.567629  664102 ops.go:34] apiserver oom_adj: -16
	I0130 21:35:33.567651  664102 kubeadm.go:640] restartCluster took 22.664003942s
	I0130 21:35:33.567666  664102 kubeadm.go:406] StartCluster complete in 22.710040218s
	I0130 21:35:33.567689  664102 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:35:33.567800  664102 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:35:33.568762  664102 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:35:33.569011  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 21:35:33.569131  664102 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 21:35:33.569298  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:35:33.571847  664102 out.go:177] * Enabled addons: 
	I0130 21:35:33.569356  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:35:33.573064  664102 addons.go:505] enable addons completed in 3.938872ms: enabled=[]
	I0130 21:35:33.573298  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
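The kapi.go client config logged above carries the client cert/key/CA paths from the minikube kubeconfig, with QPS and Burst left at 0 (client-go defaults), which is also what produces the earlier "client-side throttling" waits. A minimal sketch of the standard client-go construction this corresponds to, using the kubeconfig path from the log; not minikube's actual code:

```go
// Minimal sketch of building a client from a kubeconfig, roughly what the
// kapi.go rest.Config above represents. Paths and values are illustrative.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/18014-640473/kubeconfig" // path from the log above

	// Load the rest.Config (host, client cert/key, CA) from the kubeconfig file.
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}

	// QPS/Burst of 0 fall back to client-go defaults (5 QPS / 10 burst),
	// which is why the "client-side throttling" waits appear in the log.
	fmt.Println("API server:", cfg.Host)

	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	_ = clientset // would issue the GET /api/v1/nodes/... requests seen above
}
```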
	I0130 21:35:33.573697  664102 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0130 21:35:33.573709  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:33.573716  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:33.573722  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:33.576825  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:33.576843  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:33.576853  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:33.576867  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:33.576876  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:33.576883  664102 round_trippers.go:580]     Content-Length: 291
	I0130 21:35:33.576894  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:33 GMT
	I0130 21:35:33.576901  664102 round_trippers.go:580]     Audit-Id: 1571f07b-cb99-4eac-a543-3d653dec0882
	I0130 21:35:33.576907  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:33.576968  664102 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f33652aa-ee2d-484a-8c79-9724e39fcaab","resourceVersion":"813","creationTimestamp":"2024-01-30T21:24:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0130 21:35:33.577142  664102 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-721181" context rescaled to 1 replicas
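The rescale reported above goes through the Deployment's scale subresource (the GET .../deployments/coredns/scale a few lines earlier returned spec.replicas and status.replicas). A hedged sketch of the equivalent client-go calls, not minikube's own implementation:

```go
// Sketch of reading and setting the coredns Deployment's scale subresource,
// matching the GET .../deployments/coredns/scale request above. Illustrative only.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func rescaleCoreDNS(ctx context.Context, clientset *kubernetes.Clientset, replicas int32) error {
	deployments := clientset.AppsV1().Deployments("kube-system")

	// Read the current scale (spec.replicas / status.replicas, as in the response body above).
	scale, err := deployments.GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		return err
	}

	// Update only if needed, e.g. rescaling to 1 replica as the log reports.
	if scale.Spec.Replicas != replicas {
		scale.Spec.Replicas = replicas
		_, err = deployments.UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
	}
	return err
}
```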
	I0130 21:35:33.577177  664102 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 21:35:33.578647  664102 out.go:177] * Verifying Kubernetes components...
	I0130 21:35:33.579924  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:35:33.667910  664102 command_runner.go:130] > apiVersion: v1
	I0130 21:35:33.667932  664102 command_runner.go:130] > data:
	I0130 21:35:33.667937  664102 command_runner.go:130] >   Corefile: |
	I0130 21:35:33.667947  664102 command_runner.go:130] >     .:53 {
	I0130 21:35:33.667951  664102 command_runner.go:130] >         log
	I0130 21:35:33.667956  664102 command_runner.go:130] >         errors
	I0130 21:35:33.667960  664102 command_runner.go:130] >         health {
	I0130 21:35:33.667973  664102 command_runner.go:130] >            lameduck 5s
	I0130 21:35:33.667977  664102 command_runner.go:130] >         }
	I0130 21:35:33.667982  664102 command_runner.go:130] >         ready
	I0130 21:35:33.667987  664102 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0130 21:35:33.667992  664102 command_runner.go:130] >            pods insecure
	I0130 21:35:33.667998  664102 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0130 21:35:33.668002  664102 command_runner.go:130] >            ttl 30
	I0130 21:35:33.668009  664102 command_runner.go:130] >         }
	I0130 21:35:33.668013  664102 command_runner.go:130] >         prometheus :9153
	I0130 21:35:33.668017  664102 command_runner.go:130] >         hosts {
	I0130 21:35:33.668022  664102 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0130 21:35:33.668029  664102 command_runner.go:130] >            fallthrough
	I0130 21:35:33.668032  664102 command_runner.go:130] >         }
	I0130 21:35:33.668037  664102 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0130 21:35:33.668042  664102 command_runner.go:130] >            max_concurrent 1000
	I0130 21:35:33.668048  664102 command_runner.go:130] >         }
	I0130 21:35:33.668052  664102 command_runner.go:130] >         cache 30
	I0130 21:35:33.668064  664102 command_runner.go:130] >         loop
	I0130 21:35:33.668071  664102 command_runner.go:130] >         reload
	I0130 21:35:33.668076  664102 command_runner.go:130] >         loadbalance
	I0130 21:35:33.668080  664102 command_runner.go:130] >     }
	I0130 21:35:33.668084  664102 command_runner.go:130] > kind: ConfigMap
	I0130 21:35:33.668088  664102 command_runner.go:130] > metadata:
	I0130 21:35:33.668093  664102 command_runner.go:130] >   creationTimestamp: "2024-01-30T21:24:57Z"
	I0130 21:35:33.668097  664102 command_runner.go:130] >   name: coredns
	I0130 21:35:33.668102  664102 command_runner.go:130] >   namespace: kube-system
	I0130 21:35:33.668107  664102 command_runner.go:130] >   resourceVersion: "354"
	I0130 21:35:33.668111  664102 command_runner.go:130] >   uid: ff4213a9-7d13-4501-9687-97b80312ab56
	I0130 21:35:33.671602  664102 node_ready.go:35] waiting up to 6m0s for node "multinode-721181" to be "Ready" ...
	I0130 21:35:33.671827  664102 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 21:35:33.744979  664102 request.go:629] Waited for 73.233151ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:33.745056  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:33.745066  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:33.745075  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:33.745085  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:33.747626  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:33.747649  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:33.747659  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:33 GMT
	I0130 21:35:33.747668  664102 round_trippers.go:580]     Audit-Id: 14c70931-9417-4d3a-8866-c2888729515a
	I0130 21:35:33.747675  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:33.747689  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:33.747697  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:33.747708  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:33.748034  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:34.172708  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:34.172736  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:34.172744  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:34.172750  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:34.175045  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:34.175074  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:34.175085  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:34.175095  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:34.175105  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:34 GMT
	I0130 21:35:34.175110  664102 round_trippers.go:580]     Audit-Id: 3ef92d58-1389-4b14-975c-a917c08eb4bc
	I0130 21:35:34.175118  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:34.175135  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:34.175385  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:34.672635  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:34.672663  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:34.672672  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:34.672678  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:34.676014  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:34.676049  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:34.676060  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:34 GMT
	I0130 21:35:34.676069  664102 round_trippers.go:580]     Audit-Id: 3b881c95-2d78-4c21-b607-23497b3cb1aa
	I0130 21:35:34.676077  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:34.676085  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:34.676093  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:34.676106  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:34.676538  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:35.172152  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:35.172183  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:35.172194  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:35.172201  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:35.175492  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:35.175520  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:35.175531  664102 round_trippers.go:580]     Audit-Id: ff02625a-a5e4-480c-816f-6318e41cfc6d
	I0130 21:35:35.175539  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:35.175547  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:35.175556  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:35.175564  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:35.175576  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:35 GMT
	I0130 21:35:35.175774  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:35.672901  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:35.673012  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:35.673026  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:35.673035  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:35.675645  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:35.675677  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:35.675684  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:35.675693  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:35.675701  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:35.675710  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:35 GMT
	I0130 21:35:35.675719  664102 round_trippers.go:580]     Audit-Id: c01688ef-af5f-4e21-a9c7-8c21d1c2bcb4
	I0130 21:35:35.675726  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:35.676052  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:35.676384  664102 node_ready.go:58] node "multinode-721181" has status "Ready":"False"
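The repeating GET /api/v1/nodes/multinode-721181 requests that follow are a simple poll (roughly every 500ms, within the 6m0s budget noted earlier) until the NodeReady condition turns True. A minimal sketch of such a loop, assuming a clientset built as in the earlier sketch; the timings are taken from the log and the helper name is hypothetical:

```go
// Sketch of the node readiness poll behind the repeated
// GET /api/v1/nodes/multinode-721181 requests above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func waitNodeReady(ctx context.Context, clientset *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout) // e.g. the 6m0s budget from the log
	for time.Now().Before(deadline) {
		node, err := clientset.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
				return nil // node reports Ready:"True"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the interval seen in the log
	}
	return fmt.Errorf("node %q not Ready within %s", name, timeout)
}
```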
	I0130 21:35:36.172722  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:36.172760  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:36.172769  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:36.172775  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:36.175483  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:36.175507  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:36.175515  664102 round_trippers.go:580]     Audit-Id: e8ddcf95-b1c9-4c84-921d-590ffcafe576
	I0130 21:35:36.175520  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:36.175525  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:36.175530  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:36.175535  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:36.175540  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:36 GMT
	I0130 21:35:36.175871  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:36.672616  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:36.672643  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:36.672651  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:36.672658  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:36.675689  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:36.675717  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:36.675728  664102 round_trippers.go:580]     Audit-Id: bb87db9b-9b20-4583-9687-45b76f73e8d0
	I0130 21:35:36.675735  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:36.675743  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:36.675751  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:36.675765  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:36.675780  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:36 GMT
	I0130 21:35:36.675987  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:37.172650  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:37.172679  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:37.172691  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:37.172699  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:37.176471  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:37.176505  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:37.176518  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:37.176527  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:37 GMT
	I0130 21:35:37.176535  664102 round_trippers.go:580]     Audit-Id: 53fcf6f2-29bd-47fb-867c-28ee458c64b4
	I0130 21:35:37.176543  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:37.176550  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:37.176557  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:37.176763  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:37.672476  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:37.672506  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:37.672515  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:37.672521  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:37.675077  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:37.675100  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:37.675110  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:37.675120  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:37.675129  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:37.675138  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:37 GMT
	I0130 21:35:37.675155  664102 round_trippers.go:580]     Audit-Id: 75b20991-84f6-46dd-aedd-0f33410b6795
	I0130 21:35:37.675160  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:37.675263  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:38.172436  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:38.172461  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:38.172470  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:38.172476  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:38.175376  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:38.175404  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:38.175414  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:38 GMT
	I0130 21:35:38.175424  664102 round_trippers.go:580]     Audit-Id: a4487cd9-acb6-4e7e-9322-53d966f1aad8
	I0130 21:35:38.175432  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:38.175439  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:38.175444  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:38.175450  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:38.175952  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:38.176259  664102 node_ready.go:58] node "multinode-721181" has status "Ready":"False"
	I0130 21:35:38.672667  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:38.672700  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:38.672711  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:38.672722  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:38.675577  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:38.675602  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:38.675613  664102 round_trippers.go:580]     Audit-Id: 80629eb5-9bfb-451e-8090-363f87818a3c
	I0130 21:35:38.675622  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:38.675631  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:38.675646  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:38.675655  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:38.675661  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:38 GMT
	I0130 21:35:38.675881  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"710","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0130 21:35:39.172518  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:39.172545  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:39.172553  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:39.172560  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:39.176577  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:39.176596  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:39.176603  664102 round_trippers.go:580]     Audit-Id: af4d2bad-6847-425d-ba8f-47bb4643e62d
	I0130 21:35:39.176612  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:39.176619  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:39.176627  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:39.176634  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:39.176641  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:39 GMT
	I0130 21:35:39.176839  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:39.177181  664102 node_ready.go:49] node "multinode-721181" has status "Ready":"True"
	I0130 21:35:39.177206  664102 node_ready.go:38] duration metric: took 5.505565878s waiting for node "multinode-721181" to be "Ready" ...
	I0130 21:35:39.177219  664102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:35:39.177291  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:39.177304  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:39.177314  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:39.177325  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:39.181347  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:35:39.181366  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:39.181374  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:39.181381  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:39.181389  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:39.181397  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:39.181405  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:39 GMT
	I0130 21:35:39.181413  664102 round_trippers.go:580]     Audit-Id: 330358ce-209b-47aa-8384-0a95476f2063
	I0130 21:35:39.183218  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"835"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82957 chars]
	I0130 21:35:39.185717  664102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:39.185807  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:39.185816  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:39.185823  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:39.185831  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:39.190026  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:35:39.190042  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:39.190048  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:39.190053  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:39 GMT
	I0130 21:35:39.190058  664102 round_trippers.go:580]     Audit-Id: dcdf1196-640e-4170-bd32-b11b168600eb
	I0130 21:35:39.190063  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:39.190068  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:39.190074  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:39.190217  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:39.190641  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:39.190656  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:39.190663  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:39.190669  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:39.194340  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:39.194359  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:39.194364  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:39.194369  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:39.194374  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:39.194380  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:39.194385  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:39 GMT
	I0130 21:35:39.194393  664102 round_trippers.go:580]     Audit-Id: f163b7b9-1ed1-43ad-a206-b1fbba3e4137
	I0130 21:35:39.195380  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:39.685976  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:39.686002  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:39.686026  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:39.686032  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:39.689098  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:39.689118  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:39.689124  664102 round_trippers.go:580]     Audit-Id: 7cb2c12e-8901-4fff-9866-8e95ad04eae6
	I0130 21:35:39.689131  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:39.689136  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:39.689143  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:39.689148  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:39.689154  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:39 GMT
	I0130 21:35:39.689390  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:39.689880  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:39.689897  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:39.689904  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:39.689911  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:39.694097  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:35:39.694118  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:39.694125  664102 round_trippers.go:580]     Audit-Id: bab95294-813e-4823-a734-3da5ed108fdd
	I0130 21:35:39.694130  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:39.694135  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:39.694141  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:39.694149  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:39.694158  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:39 GMT
	I0130 21:35:39.694763  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:40.186172  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:40.186198  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:40.186206  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:40.186212  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:40.189290  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:40.189311  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:40.189320  664102 round_trippers.go:580]     Audit-Id: 9ba4cb07-e26b-4c2b-b436-46107b72b60a
	I0130 21:35:40.189326  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:40.189331  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:40.189343  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:40.189353  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:40.189361  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:40 GMT
	I0130 21:35:40.189594  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:40.190171  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:40.190190  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:40.190201  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:40.190211  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:40.192774  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:40.192796  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:40.192804  664102 round_trippers.go:580]     Audit-Id: 49bb3ec6-cacf-4592-a502-6e3af3bb09dd
	I0130 21:35:40.192809  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:40.192814  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:40.192824  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:40.192829  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:40.192837  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:40 GMT
	I0130 21:35:40.193028  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:40.686843  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:40.686871  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:40.686879  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:40.686885  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:40.693724  664102 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0130 21:35:40.693750  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:40.693759  664102 round_trippers.go:580]     Audit-Id: af8e1848-4529-44e0-b8cc-a314575025d6
	I0130 21:35:40.693767  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:40.693775  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:40.693784  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:40.693798  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:40.693809  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:40 GMT
	I0130 21:35:40.694061  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:40.694664  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:40.694681  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:40.694689  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:40.694694  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:40.696719  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:40.696738  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:40.696744  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:40 GMT
	I0130 21:35:40.696750  664102 round_trippers.go:580]     Audit-Id: 4c8655b8-2a8c-4e80-b178-caa66ed2a5ed
	I0130 21:35:40.696755  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:40.696760  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:40.696765  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:40.696772  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:40.696944  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:41.186878  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:41.186908  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:41.186917  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:41.186923  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:41.189698  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:41.189726  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:41.189736  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:41.189744  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:41.189752  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:41 GMT
	I0130 21:35:41.189760  664102 round_trippers.go:580]     Audit-Id: 619e59cc-7e24-42ea-ac63-6a8f956e0cb1
	I0130 21:35:41.189768  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:41.189783  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:41.190019  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:41.190675  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:41.190698  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:41.190708  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:41.190716  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:41.193234  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:41.193256  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:41.193265  664102 round_trippers.go:580]     Audit-Id: 40969afa-747a-4521-8a10-bc0fb2d613e6
	I0130 21:35:41.193273  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:41.193291  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:41.193299  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:41.193307  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:41.193319  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:41 GMT
	I0130 21:35:41.193495  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:41.193902  664102 pod_ready.go:102] pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace has status "Ready":"False"
	I0130 21:35:41.686788  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:41.686813  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:41.686821  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:41.686827  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:41.691329  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:35:41.691353  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:41.691365  664102 round_trippers.go:580]     Audit-Id: 45e4b0d5-d5a6-4596-9c18-416d813456cc
	I0130 21:35:41.691373  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:41.691381  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:41.691390  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:41.691400  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:41.691416  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:41 GMT
	I0130 21:35:41.691598  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:41.692116  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:41.692136  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:41.692144  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:41.692149  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:41.697369  664102 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0130 21:35:41.697389  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:41.697397  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:41.697421  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:41.697441  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:41.697458  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:41.697477  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:41 GMT
	I0130 21:35:41.697489  664102 round_trippers.go:580]     Audit-Id: 164362d1-c966-46a4-a7f7-637d80f031b8
	I0130 21:35:41.697625  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:42.186050  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:42.186078  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:42.186087  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:42.186093  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:42.189655  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:42.189678  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:42.189688  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:42.189696  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:42.189703  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:42.189710  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:42 GMT
	I0130 21:35:42.189719  664102 round_trippers.go:580]     Audit-Id: 8fb8d1d9-ef73-494c-a6ce-f24ec1532710
	I0130 21:35:42.189731  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:42.189936  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:42.190439  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:42.190457  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:42.190467  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:42.190476  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:42.192752  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:42.192774  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:42.192783  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:42.192792  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:42 GMT
	I0130 21:35:42.192800  664102 round_trippers.go:580]     Audit-Id: 993b9b02-a299-4ff7-bc5b-50bb75d563fd
	I0130 21:35:42.192809  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:42.192817  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:42.192833  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:42.192954  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:42.686591  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:42.686623  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:42.686641  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:42.686649  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:42.689601  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:42.689623  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:42.689631  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:42.689636  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:42.689641  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:42.689647  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:42 GMT
	I0130 21:35:42.689652  664102 round_trippers.go:580]     Audit-Id: 365b99f5-85a1-4661-891d-54c7ab854dda
	I0130 21:35:42.689669  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:42.689838  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:42.690419  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:42.690440  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:42.690451  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:42.690461  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:42.692495  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:42.692526  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:42.692536  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:42.692546  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:42.692556  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:42 GMT
	I0130 21:35:42.692569  664102 round_trippers.go:580]     Audit-Id: 28cd76b5-0f11-4e69-8e28-99dc0e70452c
	I0130 21:35:42.692578  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:42.692593  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:42.692692  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:43.186780  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:43.186808  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:43.186821  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:43.186830  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:43.189925  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:43.189974  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:43.189989  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:43.190001  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:43.190013  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:43.190020  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:43.190026  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:43 GMT
	I0130 21:35:43.190035  664102 round_trippers.go:580]     Audit-Id: 249c20e6-49c9-490f-ac9e-221b35b7333f
	I0130 21:35:43.190232  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:43.190840  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:43.190862  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:43.190874  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:43.190888  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:43.193087  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:43.193105  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:43.193114  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:43.193122  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:43.193135  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:43 GMT
	I0130 21:35:43.193148  664102 round_trippers.go:580]     Audit-Id: 9202633c-ea6b-4ded-9ad1-6d59c5f1ee48
	I0130 21:35:43.193156  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:43.193169  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:43.193336  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:43.685928  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:43.685957  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:43.685972  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:43.685981  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:43.688708  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:43.688729  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:43.688736  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:43 GMT
	I0130 21:35:43.688742  664102 round_trippers.go:580]     Audit-Id: 090fca4f-d64c-4a70-9eec-1682596de400
	I0130 21:35:43.688747  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:43.688754  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:43.688761  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:43.688771  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:43.689022  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:43.689538  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:43.689555  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:43.689566  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:43.689574  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:43.691618  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:43.691640  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:43.691648  664102 round_trippers.go:580]     Audit-Id: 98efc9a4-4a66-4116-a633-4079d40598ae
	I0130 21:35:43.691654  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:43.691658  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:43.691663  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:43.691671  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:43.691684  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:43 GMT
	I0130 21:35:43.691843  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:43.692250  664102 pod_ready.go:102] pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace has status "Ready":"False"
	I0130 21:35:44.186541  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:44.186572  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:44.186584  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:44.186593  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:44.189969  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:44.189992  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:44.190000  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:44.190009  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:44.190018  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:44.190026  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:44 GMT
	I0130 21:35:44.190039  664102 round_trippers.go:580]     Audit-Id: 45b02454-061d-420b-a475-b053832af7c9
	I0130 21:35:44.190047  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:44.190229  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:44.190715  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:44.190732  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:44.190743  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:44.190756  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:44.193083  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:44.193106  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:44.193116  664102 round_trippers.go:580]     Audit-Id: 8b5f02d1-b85a-476d-981b-d26a382f4149
	I0130 21:35:44.193125  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:44.193133  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:44.193148  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:44.193155  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:44.193165  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:44 GMT
	I0130 21:35:44.193413  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:44.686093  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:44.686124  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:44.686133  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:44.686139  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:44.689127  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:44.689147  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:44.689159  664102 round_trippers.go:580]     Audit-Id: ac778d50-20bf-4107-94d1-9907a6963836
	I0130 21:35:44.689166  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:44.689174  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:44.689179  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:44.689186  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:44.689193  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:44 GMT
	I0130 21:35:44.689446  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:44.690070  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:44.690088  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:44.690095  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:44.690101  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:44.692434  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:44.692455  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:44.692462  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:44.692467  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:44.692472  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:44.692481  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:44.692486  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:44 GMT
	I0130 21:35:44.692492  664102 round_trippers.go:580]     Audit-Id: b46f385e-8228-4463-9da1-f5fcb7e37cb2
	I0130 21:35:44.692664  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:45.186008  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:45.186031  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:45.186040  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:45.186046  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:45.190248  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:35:45.190268  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:45.190275  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:45.190280  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:45 GMT
	I0130 21:35:45.190286  664102 round_trippers.go:580]     Audit-Id: c64519a3-251b-4812-b5d1-db75d0d3c8c7
	I0130 21:35:45.190291  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:45.190296  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:45.190301  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:45.190514  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:45.191167  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:45.191188  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:45.191196  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:45.191205  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:45.193303  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:45.193318  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:45.193326  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:45.193335  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:45.193344  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:45.193355  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:45 GMT
	I0130 21:35:45.193363  664102 round_trippers.go:580]     Audit-Id: ac273cd7-7272-46fb-9105-01208807c4a2
	I0130 21:35:45.193374  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:45.193785  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:45.686434  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:45.686462  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:45.686470  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:45.686477  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:45.689415  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:45.689434  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:45.689441  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:45.689447  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:45 GMT
	I0130 21:35:45.689452  664102 round_trippers.go:580]     Audit-Id: 7a4af1b5-5630-4f2b-86a4-579ed13eaf6e
	I0130 21:35:45.689457  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:45.689462  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:45.689479  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:45.689773  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:45.690212  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:45.690225  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:45.690232  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:45.690238  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:45.693276  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:45.693301  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:45.693312  664102 round_trippers.go:580]     Audit-Id: 51c0d0aa-0942-44ea-8232-4cf6009eabb7
	I0130 21:35:45.693351  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:45.693369  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:45.693378  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:45.693391  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:45.693400  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:45 GMT
	I0130 21:35:45.693564  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:45.693976  664102 pod_ready.go:102] pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace has status "Ready":"False"
	I0130 21:35:46.186213  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:46.186251  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.186263  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.186273  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.190153  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:46.190184  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.190194  664102 round_trippers.go:580]     Audit-Id: d42ab2b1-cb8f-447b-a557-2602b6927783
	I0130 21:35:46.190203  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.190211  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.190219  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.190227  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.190236  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.190480  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"774","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0130 21:35:46.191108  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:46.191130  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.191141  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.191150  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.194386  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:46.194403  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.194409  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.194418  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.194424  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.194428  664102 round_trippers.go:580]     Audit-Id: dfd1fb05-274a-4aea-969e-793165492ddd
	I0130 21:35:46.194433  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.194442  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.195387  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:46.686086  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:35:46.686118  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.686132  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.686142  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.689781  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:46.689809  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.689820  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.689827  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.689833  664102 round_trippers.go:580]     Audit-Id: bb961ebf-14b7-46fb-b137-90740eced946
	I0130 21:35:46.689839  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.689848  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.689856  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.690236  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0130 21:35:46.690847  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:46.690864  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.690872  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.690878  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.693270  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:46.693286  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.693292  664102 round_trippers.go:580]     Audit-Id: dae50565-d5cb-4748-bd7f-896b5a745579
	I0130 21:35:46.693298  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.693303  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.693308  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.693313  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.693326  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.693808  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:46.694107  664102 pod_ready.go:92] pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:46.694125  664102 pod_ready.go:81] duration metric: took 7.508388632s waiting for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.694133  664102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.694186  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-721181
	I0130 21:35:46.694194  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.694200  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.694206  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.696378  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:46.696399  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.696408  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.696416  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.696431  664102 round_trippers.go:580]     Audit-Id: 6e41294c-6d0c-4569-b69f-d79dc9fccf20
	I0130 21:35:46.696440  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.696451  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.696462  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.696611  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-721181","namespace":"kube-system","uid":"83f20d3f-5604-4e3c-a7c8-b38a9b20c035","resourceVersion":"838","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.mirror":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.seen":"2024-01-30T21:24:57.236042745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0130 21:35:46.696989  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:46.697003  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.697010  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.697019  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.699501  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:46.699521  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.699529  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.699537  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.699546  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.699557  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.699565  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.699580  664102 round_trippers.go:580]     Audit-Id: 884cfe55-b58a-4fbc-a675-eab779961dd5
	I0130 21:35:46.699719  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:46.700029  664102 pod_ready.go:92] pod "etcd-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:46.700044  664102 pod_ready.go:81] duration metric: took 5.904536ms waiting for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.700062  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.700121  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-721181
	I0130 21:35:46.700131  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.700137  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.700147  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.702345  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:46.702366  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.702375  664102 round_trippers.go:580]     Audit-Id: 6d742742-07f7-47a0-8eb9-3c372b08f3da
	I0130 21:35:46.702385  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.702393  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.702400  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.702412  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.702422  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.702568  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-721181","namespace":"kube-system","uid":"fbcc53e1-4691-4473-b215-2cb6daeaf321","resourceVersion":"850","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.mirror":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.seen":"2024-01-30T21:24:57.236043778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0130 21:35:46.703023  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:46.703042  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.703051  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.703058  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.704673  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:46.704687  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.704693  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.704699  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.704705  664102 round_trippers.go:580]     Audit-Id: 728f18c0-999d-401f-ba9d-a0fdcf03f723
	I0130 21:35:46.704714  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.704721  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.704729  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.704930  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:46.705181  664102 pod_ready.go:92] pod "kube-apiserver-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:46.705199  664102 pod_ready.go:81] duration metric: took 5.12652ms waiting for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.705207  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.705249  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-721181
	I0130 21:35:46.705257  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.705264  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.705270  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.707079  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:46.707097  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.707103  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.707109  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.707114  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.707119  664102 round_trippers.go:580]     Audit-Id: d3d38bac-f8b0-4c31-a370-42e492b57ff2
	I0130 21:35:46.707125  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.707135  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.707333  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-721181","namespace":"kube-system","uid":"de8beec4-5cad-4405-b856-7475b95559ba","resourceVersion":"837","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.mirror":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.seen":"2024-01-30T21:24:57.236037857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0130 21:35:46.707697  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:46.707711  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.707718  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.707723  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.709483  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:46.709497  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.709506  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.709514  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.709523  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.709531  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.709540  664102 round_trippers.go:580]     Audit-Id: 7bc1284f-e4d1-4ed0-bc1e-9ed520f423e9
	I0130 21:35:46.709553  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.709741  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:46.710100  664102 pod_ready.go:92] pod "kube-controller-manager-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:46.710120  664102 pod_ready.go:81] duration metric: took 4.907179ms waiting for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.710130  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.710172  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:35:46.710181  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.710188  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.710194  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.712028  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:46.712042  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.712048  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.712054  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.712059  664102 round_trippers.go:580]     Audit-Id: 2e5a8213-7328-4629-ab1f-b11af1eb9bbd
	I0130 21:35:46.712064  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.712069  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.712074  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.712366  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-49rq4","generateName":"kube-proxy-","namespace":"kube-system","uid":"63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3","resourceVersion":"812","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 21:35:46.712669  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:46.712679  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.712688  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.712693  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.715729  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:46.715749  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.715759  664102 round_trippers.go:580]     Audit-Id: e6463bc0-c34e-4ea3-931f-6fce7f316edc
	I0130 21:35:46.715767  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.715775  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.715783  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.715794  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.715803  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.715956  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:46.716316  664102 pod_ready.go:92] pod "kube-proxy-49rq4" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:46.716340  664102 pod_ready.go:81] duration metric: took 6.203947ms waiting for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.716353  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:46.886746  664102 request.go:629] Waited for 170.327711ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:35:46.886817  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:35:46.886822  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:46.886830  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:46.886836  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:46.889612  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:46.889638  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:46.889647  664102 round_trippers.go:580]     Audit-Id: 7eb859ae-4857-416a-a34a-2255e8073d18
	I0130 21:35:46.889654  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:46.889661  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:46.889668  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:46.889675  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:46.889682  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:46 GMT
	I0130 21:35:46.889871  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwg96","generateName":"kube-proxy-","namespace":"kube-system","uid":"68cc319c-45c4-4a65-9712-d4e419acd7d6","resourceVersion":"681","creationTimestamp":"2024-01-30T21:26:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0130 21:35:47.086894  664102 request.go:629] Waited for 196.470978ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:35:47.086995  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:35:47.087010  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:47.087023  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:47.087042  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:47.096335  664102 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0130 21:35:47.096358  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:47.096365  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:47.096371  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:47.096376  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:47 GMT
	I0130 21:35:47.096381  664102 round_trippers.go:580]     Audit-Id: 84da9bca-c2c8-4adc-b3e8-acaa4b2bcc26
	I0130 21:35:47.096386  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:47.096391  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:47.097213  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m03","uid":"f8b13ad8-e768-466a-b155-3ab55af16d96","resourceVersion":"702","creationTimestamp":"2024-01-30T21:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_27_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach [truncated 3965 chars]
	I0130 21:35:47.097582  664102 pod_ready.go:92] pod "kube-proxy-lwg96" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:47.097602  664102 pod_ready.go:81] duration metric: took 381.240565ms waiting for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:47.097615  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:47.286685  664102 request.go:629] Waited for 188.9625ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:35:47.286767  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:35:47.286775  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:47.286791  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:47.286804  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:47.289576  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:47.289598  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:47.289606  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:47.289616  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:47.289628  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:47 GMT
	I0130 21:35:47.289640  664102 round_trippers.go:580]     Audit-Id: 66e5cf8c-dcad-41e6-a490-d463f0af136b
	I0130 21:35:47.289652  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:47.289664  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:47.290090  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s9pwd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6594579-7b2f-4ab5-b7f2-0b176bad1705","resourceVersion":"479","creationTimestamp":"2024-01-30T21:26:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5522 chars]
	I0130 21:35:47.486892  664102 request.go:629] Waited for 196.368258ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:35:47.486979  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:35:47.486984  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:47.486993  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:47.486999  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:47.489619  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:47.489639  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:47.489648  664102 round_trippers.go:580]     Audit-Id: 0a908b59-5647-4258-8678-f9497e12f45d
	I0130 21:35:47.489659  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:47.489667  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:47.489676  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:47.489685  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:47.489696  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:47 GMT
	I0130 21:35:47.489822  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m02","uid":"47058aff-0457-4267-b98b-c3be7d21f2dc","resourceVersion":"708","creationTimestamp":"2024-01-30T21:26:05Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_27_34_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotat
ions":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"manager":"ku [truncated 4235 chars]
	I0130 21:35:47.490196  664102 pod_ready.go:92] pod "kube-proxy-s9pwd" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:47.490217  664102 pod_ready.go:81] duration metric: took 392.589806ms waiting for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:47.490229  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:47.686283  664102 request.go:629] Waited for 195.970408ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:35:47.686376  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:35:47.686390  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:47.686403  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:47.686417  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:47.689027  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:47.689048  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:47.689055  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:47.689060  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:47.689066  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:47.689074  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:47.689083  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:47 GMT
	I0130 21:35:47.689104  664102 round_trippers.go:580]     Audit-Id: c32c6170-2ef7-4976-bb7a-1bd3b2352999
	I0130 21:35:47.689355  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-721181","namespace":"kube-system","uid":"d7e4675b-0e8c-46de-9b39-435d25004a88","resourceVersion":"852","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"48930a2236670664c600a427fcb648de","kubernetes.io/config.mirror":"48930a2236670664c600a427fcb648de","kubernetes.io/config.seen":"2024-01-30T21:24:57.236041601Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0130 21:35:47.887095  664102 request.go:629] Waited for 197.346357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:47.887161  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:35:47.887173  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:47.887187  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:47.887203  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:47.890067  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:47.890094  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:47.890105  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:47.890115  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:47.890123  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:47.890129  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:47 GMT
	I0130 21:35:47.890134  664102 round_trippers.go:580]     Audit-Id: 7a0abb2f-a868-40b0-a8ad-6a023185070f
	I0130 21:35:47.890139  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:47.890539  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0130 21:35:47.890968  664102 pod_ready.go:92] pod "kube-scheduler-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:35:47.890989  664102 pod_ready.go:81] duration metric: took 400.747646ms waiting for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:35:47.891003  664102 pod_ready.go:38] duration metric: took 8.713767989s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
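(Aside, not part of the log: the pod_ready wait above polls each system pod and inspects its Ready condition. A minimal client-go sketch of that check follows; the kubeconfig path and pod name are illustrative assumptions, and this is not minikube's actual implementation.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's Ready condition is True,
    // which is the status the pod_ready.go wait in the log checks for.
    func isPodReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Hypothetical kubeconfig path; the test harness uses its own profile.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-lwg96", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s ready: %v\n", pod.Name, isPodReady(pod))
    }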
	I0130 21:35:47.891021  664102 api_server.go:52] waiting for apiserver process to appear ...
	I0130 21:35:47.891095  664102 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:35:47.904908  664102 command_runner.go:130] > 1099
	I0130 21:35:47.905060  664102 api_server.go:72] duration metric: took 14.327855144s to wait for apiserver process to appear ...
	I0130 21:35:47.905085  664102 api_server.go:88] waiting for apiserver healthz status ...
	I0130 21:35:47.905108  664102 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:35:47.911066  664102 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0130 21:35:47.911173  664102 round_trippers.go:463] GET https://192.168.39.174:8443/version
	I0130 21:35:47.911188  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:47.911199  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:47.911208  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:47.912349  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:35:47.912370  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:47.912377  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:47.912383  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:47.912388  664102 round_trippers.go:580]     Content-Length: 264
	I0130 21:35:47.912393  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:47 GMT
	I0130 21:35:47.912398  664102 round_trippers.go:580]     Audit-Id: dde5df70-8350-44b1-8b5b-4ee7da0417e6
	I0130 21:35:47.912403  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:47.912411  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:47.912429  664102 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.4",
	  "gitCommit": "bae2c62678db2b5053817bc97181fcc2e8388103",
	  "gitTreeState": "clean",
	  "buildDate": "2023-11-15T16:48:54Z",
	  "goVersion": "go1.20.11",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0130 21:35:47.912476  664102 api_server.go:141] control plane version: v1.28.4
	I0130 21:35:47.912491  664102 api_server.go:131] duration metric: took 7.399256ms to wait for apiserver health ...
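(Aside: the healthz probe and the /version request above amount to two HTTPS GETs against the apiserver at 192.168.39.174:8443. A rough sketch of the same two probes follows; certificate verification is skipped for brevity, whereas the real client authenticates with the cluster certificates, so this is illustrative only.)

    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "io"
        "net/http"
    )

    // versionInfo mirrors a few fields of the /version response seen in the log.
    type versionInfo struct {
        Major      string `json:"major"`
        Minor      string `json:"minor"`
        GitVersion string `json:"gitVersion"`
        Platform   string `json:"platform"`
    }

    func main() {
        // TLS verification skipped only for this sketch; a real probe would use the cluster CA.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}

        // 1) healthz: a 200 response with body "ok" means the apiserver is healthy.
        resp, err := client.Get("https://192.168.39.174:8443/healthz")
        if err != nil {
            panic(err)
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)

        // 2) /version: decode the control-plane version (v1.28.4 in the log above).
        resp, err = client.Get("https://192.168.39.174:8443/version")
        if err != nil {
            panic(err)
        }
        var v versionInfo
        if err := json.NewDecoder(resp.Body).Decode(&v); err != nil {
            panic(err)
        }
        resp.Body.Close()
        fmt.Printf("control plane: %s (%s)\n", v.GitVersion, v.Platform)
    }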
	I0130 21:35:47.912507  664102 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 21:35:48.086096  664102 request.go:629] Waited for 173.501356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:48.086170  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:48.086175  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:48.086183  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:48.086189  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:48.090436  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:35:48.090467  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:48.090477  664102 round_trippers.go:580]     Audit-Id: 18646f5e-07b5-4575-a110-511fa8b72bab
	I0130 21:35:48.090486  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:48.090500  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:48.090509  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:48.090518  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:48.090527  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:48 GMT
	I0130 21:35:48.091358  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0130 21:35:48.094755  664102 system_pods.go:59] 12 kube-system pods found
	I0130 21:35:48.094786  664102 system_pods.go:61] "coredns-5dd5756b68-2jstl" [9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2] Running
	I0130 21:35:48.094794  664102 system_pods.go:61] "etcd-multinode-721181" [83f20d3f-5604-4e3c-a7c8-b38a9b20c035] Running
	I0130 21:35:48.094804  664102 system_pods.go:61] "kindnet-8thzp" [c1610c3b-8a9d-47d4-a204-75648b6b61ab] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 21:35:48.094822  664102 system_pods.go:61] "kindnet-qxwqk" [a733f539-7a0f-46d9-b868-9b0d80001474] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 21:35:48.094829  664102 system_pods.go:61] "kindnet-zt7wg" [49dc74c8-c0dc-4421-99f2-b40bcf3429ff] Running
	I0130 21:35:48.094836  664102 system_pods.go:61] "kube-apiserver-multinode-721181" [fbcc53e1-4691-4473-b215-2cb6daeaf321] Running
	I0130 21:35:48.094842  664102 system_pods.go:61] "kube-controller-manager-multinode-721181" [de8beec4-5cad-4405-b856-7475b95559ba] Running
	I0130 21:35:48.094846  664102 system_pods.go:61] "kube-proxy-49rq4" [63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3] Running
	I0130 21:35:48.094854  664102 system_pods.go:61] "kube-proxy-lwg96" [68cc319c-45c4-4a65-9712-d4e419acd7d6] Running
	I0130 21:35:48.094860  664102 system_pods.go:61] "kube-proxy-s9pwd" [e6594579-7b2f-4ab5-b7f2-0b176bad1705] Running
	I0130 21:35:48.094871  664102 system_pods.go:61] "kube-scheduler-multinode-721181" [d7e4675b-0e8c-46de-9b39-435d25004a88] Running
	I0130 21:35:48.094878  664102 system_pods.go:61] "storage-provisioner" [5f9b77ce-6169-4580-ae1c-04759bfcf2d7] Running
	I0130 21:35:48.094886  664102 system_pods.go:74] duration metric: took 182.37031ms to wait for pod list to return data ...
	I0130 21:35:48.094895  664102 default_sa.go:34] waiting for default service account to be created ...
	I0130 21:35:48.286213  664102 request.go:629] Waited for 191.216149ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/default/serviceaccounts
	I0130 21:35:48.286294  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/default/serviceaccounts
	I0130 21:35:48.286302  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:48.286319  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:48.286333  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:48.289090  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:35:48.289110  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:48.289117  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:48 GMT
	I0130 21:35:48.289124  664102 round_trippers.go:580]     Audit-Id: 7262828f-0bc7-4dad-b3e9-72fc3538d935
	I0130 21:35:48.289133  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:48.289141  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:48.289150  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:48.289159  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:48.289172  664102 round_trippers.go:580]     Content-Length: 261
	I0130 21:35:48.289199  664102 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"ec7f360e-e53e-471b-ada1-add798e0ad59","resourceVersion":"302","creationTimestamp":"2024-01-30T21:25:10Z"}}]}
	I0130 21:35:48.289436  664102 default_sa.go:45] found service account: "default"
	I0130 21:35:48.289461  664102 default_sa.go:55] duration metric: took 194.558319ms for default service account to be created ...
	I0130 21:35:48.289499  664102 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 21:35:48.486664  664102 request.go:629] Waited for 197.086717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:48.486740  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:35:48.486748  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:48.486761  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:48.486838  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:48.490643  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:48.490667  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:48.490674  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:48.490680  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:48.490685  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:48 GMT
	I0130 21:35:48.490693  664102 round_trippers.go:580]     Audit-Id: b451c650-e502-48c8-b1ec-78458428e797
	I0130 21:35:48.490700  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:48.490709  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:48.492210  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81878 chars]
	I0130 21:35:48.494604  664102 system_pods.go:86] 12 kube-system pods found
	I0130 21:35:48.494631  664102 system_pods.go:89] "coredns-5dd5756b68-2jstl" [9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2] Running
	I0130 21:35:48.494640  664102 system_pods.go:89] "etcd-multinode-721181" [83f20d3f-5604-4e3c-a7c8-b38a9b20c035] Running
	I0130 21:35:48.494652  664102 system_pods.go:89] "kindnet-8thzp" [c1610c3b-8a9d-47d4-a204-75648b6b61ab] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 21:35:48.494662  664102 system_pods.go:89] "kindnet-qxwqk" [a733f539-7a0f-46d9-b868-9b0d80001474] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0130 21:35:48.494671  664102 system_pods.go:89] "kindnet-zt7wg" [49dc74c8-c0dc-4421-99f2-b40bcf3429ff] Running
	I0130 21:35:48.494683  664102 system_pods.go:89] "kube-apiserver-multinode-721181" [fbcc53e1-4691-4473-b215-2cb6daeaf321] Running
	I0130 21:35:48.494692  664102 system_pods.go:89] "kube-controller-manager-multinode-721181" [de8beec4-5cad-4405-b856-7475b95559ba] Running
	I0130 21:35:48.494700  664102 system_pods.go:89] "kube-proxy-49rq4" [63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3] Running
	I0130 21:35:48.494709  664102 system_pods.go:89] "kube-proxy-lwg96" [68cc319c-45c4-4a65-9712-d4e419acd7d6] Running
	I0130 21:35:48.494719  664102 system_pods.go:89] "kube-proxy-s9pwd" [e6594579-7b2f-4ab5-b7f2-0b176bad1705] Running
	I0130 21:35:48.494729  664102 system_pods.go:89] "kube-scheduler-multinode-721181" [d7e4675b-0e8c-46de-9b39-435d25004a88] Running
	I0130 21:35:48.494736  664102 system_pods.go:89] "storage-provisioner" [5f9b77ce-6169-4580-ae1c-04759bfcf2d7] Running
	I0130 21:35:48.494746  664102 system_pods.go:126] duration metric: took 205.234542ms to wait for k8s-apps to be running ...
	I0130 21:35:48.494762  664102 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 21:35:48.494820  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:35:48.509513  664102 system_svc.go:56] duration metric: took 14.74176ms WaitForService to wait for kubelet.
	I0130 21:35:48.509544  664102 kubeadm.go:581] duration metric: took 14.932342307s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 21:35:48.509569  664102 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:35:48.686984  664102 request.go:629] Waited for 177.328922ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0130 21:35:48.687065  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0130 21:35:48.687075  664102 round_trippers.go:469] Request Headers:
	I0130 21:35:48.687090  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:35:48.687108  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:35:48.690143  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:35:48.690168  664102 round_trippers.go:577] Response Headers:
	I0130 21:35:48.690184  664102 round_trippers.go:580]     Audit-Id: 71dd4f1f-4bdf-4c6a-aabf-19ed1befc2cb
	I0130 21:35:48.690193  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:35:48.690201  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:35:48.690213  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:35:48.690225  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:35:48.690236  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:35:48 GMT
	I0130 21:35:48.690590  664102 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"870"},"items":[{"metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"835","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 16179 chars]
	I0130 21:35:48.691174  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:35:48.691194  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:35:48.691205  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:35:48.691211  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:35:48.691217  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:35:48.691223  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:35:48.691233  664102 node_conditions.go:105] duration metric: took 181.657563ms to run NodePressure ...
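(Aside: the NodePressure step reads each node's status and reports the cpu and ephemeral-storage capacity shown above. A small client-go sketch of the same read, under the same hypothetical kubeconfig assumption as the earlier sketch.)

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
            // A node is under pressure if MemoryPressure or DiskPressure is True.
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  pressure condition: %s\n", c.Type)
                }
            }
        }
    }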
	I0130 21:35:48.691247  664102 start.go:228] waiting for startup goroutines ...
	I0130 21:35:48.691257  664102 start.go:233] waiting for cluster config update ...
	I0130 21:35:48.691264  664102 start.go:242] writing updated cluster config ...
	I0130 21:35:48.691724  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:35:48.691825  664102 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/config.json ...
	I0130 21:35:48.694780  664102 out.go:177] * Starting worker node multinode-721181-m02 in cluster multinode-721181
	I0130 21:35:48.695952  664102 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:35:48.695971  664102 cache.go:56] Caching tarball of preloaded images
	I0130 21:35:48.696068  664102 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 21:35:48.696084  664102 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 21:35:48.696179  664102 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/config.json ...
	I0130 21:35:48.696338  664102 start.go:365] acquiring machines lock for multinode-721181-m02: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 21:35:48.696380  664102 start.go:369] acquired machines lock for "multinode-721181-m02" in 23.931µs
	I0130 21:35:48.696393  664102 start.go:96] Skipping create...Using existing machine configuration
	I0130 21:35:48.696400  664102 fix.go:54] fixHost starting: m02
	I0130 21:35:48.696658  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:35:48.696689  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:35:48.711311  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40817
	I0130 21:35:48.711713  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:35:48.712193  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:35:48.712210  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:35:48.712550  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:35:48.712773  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:35:48.712945  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetState
	I0130 21:35:48.714672  664102 fix.go:102] recreateIfNeeded on multinode-721181-m02: state=Running err=<nil>
	W0130 21:35:48.714689  664102 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 21:35:48.716550  664102 out.go:177] * Updating the running kvm2 "multinode-721181-m02" VM ...
	I0130 21:35:48.718034  664102 machine.go:88] provisioning docker machine ...
	I0130 21:35:48.718055  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:35:48.718304  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetMachineName
	I0130 21:35:48.718493  664102 buildroot.go:166] provisioning hostname "multinode-721181-m02"
	I0130 21:35:48.718519  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetMachineName
	I0130 21:35:48.718668  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:35:48.720924  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.721390  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:35:48.721406  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.721538  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:35:48.721693  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:48.721845  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:48.721994  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:35:48.722160  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:48.722477  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0130 21:35:48.722490  664102 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-721181-m02 && echo "multinode-721181-m02" | sudo tee /etc/hostname
	I0130 21:35:48.861121  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-721181-m02
	
	I0130 21:35:48.861161  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:35:48.864304  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.864722  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:35:48.864748  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.864896  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:35:48.865083  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:48.865241  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:48.865388  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:35:48.865556  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:48.865885  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0130 21:35:48.865911  664102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-721181-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-721181-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-721181-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 21:35:48.986481  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 21:35:48.986515  664102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 21:35:48.986536  664102 buildroot.go:174] setting up certificates
	I0130 21:35:48.986551  664102 provision.go:83] configureAuth start
	I0130 21:35:48.986565  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetMachineName
	I0130 21:35:48.986853  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetIP
	I0130 21:35:48.989696  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.990108  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:35:48.990138  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.990244  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:35:48.992306  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.992633  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:35:48.992657  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:48.992805  664102 provision.go:138] copyHostCerts
	I0130 21:35:48.992836  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:35:48.992864  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 21:35:48.992873  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:35:48.992938  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 21:35:48.993002  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:35:48.993018  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 21:35:48.993024  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:35:48.993048  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 21:35:48.993087  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:35:48.993106  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 21:35:48.993112  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:35:48.993131  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 21:35:48.993173  664102 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.multinode-721181-m02 san=[192.168.39.69 192.168.39.69 localhost 127.0.0.1 minikube multinode-721181-m02]
	I0130 21:35:49.253025  664102 provision.go:172] copyRemoteCerts
	I0130 21:35:49.253088  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 21:35:49.253114  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:35:49.256021  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:49.256367  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:35:49.256392  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:49.256551  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:35:49.256768  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:49.256923  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:35:49.257056  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m02/id_rsa Username:docker}
	I0130 21:35:49.347474  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 21:35:49.347545  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 21:35:49.371047  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 21:35:49.371114  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0130 21:35:49.392216  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 21:35:49.392294  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 21:35:49.413891  664102 provision.go:86] duration metric: configureAuth took 427.32318ms
	I0130 21:35:49.413922  664102 buildroot.go:189] setting minikube options for container-runtime
	I0130 21:35:49.414133  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:35:49.414209  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:35:49.416741  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:49.417153  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:35:49.417184  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:35:49.417392  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:35:49.417655  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:49.417829  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:35:49.417995  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:35:49.418173  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:35:49.418474  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0130 21:35:49.418495  664102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 21:37:19.983709  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 21:37:19.983765  664102 machine.go:91] provisioned docker machine in 1m31.26570065s
	I0130 21:37:19.983783  664102 start.go:300] post-start starting for "multinode-721181-m02" (driver="kvm2")
	I0130 21:37:19.983801  664102 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 21:37:19.983837  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:37:19.984178  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 21:37:19.984228  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:37:19.987303  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:19.987762  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:37:19.987791  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:19.987937  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:37:19.988133  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:37:19.988338  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:37:19.988483  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m02/id_rsa Username:docker}
	I0130 21:37:20.081210  664102 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 21:37:20.085501  664102 command_runner.go:130] > NAME=Buildroot
	I0130 21:37:20.085524  664102 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0130 21:37:20.085529  664102 command_runner.go:130] > ID=buildroot
	I0130 21:37:20.085541  664102 command_runner.go:130] > VERSION_ID=2021.02.12
	I0130 21:37:20.085546  664102 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0130 21:37:20.085719  664102 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 21:37:20.085740  664102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 21:37:20.085813  664102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 21:37:20.085922  664102 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 21:37:20.085936  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /etc/ssl/certs/6477182.pem
	I0130 21:37:20.086041  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 21:37:20.096027  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:37:20.118816  664102 start.go:303] post-start completed in 135.019377ms
	I0130 21:37:20.118841  664102 fix.go:56] fixHost completed within 1m31.422440741s
	I0130 21:37:20.118871  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:37:20.121634  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.122041  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:37:20.122070  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.122233  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:37:20.122453  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:37:20.122630  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:37:20.122780  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:37:20.122943  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:37:20.123275  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.69 22 <nil> <nil>}
	I0130 21:37:20.123301  664102 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 21:37:20.242398  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706650640.233768164
	
	I0130 21:37:20.242427  664102 fix.go:206] guest clock: 1706650640.233768164
	I0130 21:37:20.242438  664102 fix.go:219] Guest: 2024-01-30 21:37:20.233768164 +0000 UTC Remote: 2024-01-30 21:37:20.118847043 +0000 UTC m=+455.153580682 (delta=114.921121ms)
	I0130 21:37:20.242458  664102 fix.go:190] guest clock delta is within tolerance: 114.921121ms
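(Aside: the guest clock is read as a fractional Unix timestamp via `date +%s.%N` and compared with the host's wall clock; the drift is accepted if the delta stays within a tolerance. A small sketch of that comparison using the two timestamps from the log; the one-second tolerance here is an assumption for illustration, not minikube's configured value.)

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseUnixFrac converts a "seconds.nanoseconds" string such as
    // "1706650640.233768164" (the guest clock in the log) into a time.Time.
    func parseUnixFrac(s string) (time.Time, error) {
        parts := strings.SplitN(s, ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // Pad the fractional part to 9 digits (nanoseconds) before parsing.
            frac := parts[1] + strings.Repeat("0", 9)
            nsec, err = strconv.ParseInt(frac[:9], 10, 64)
            if err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec).UTC(), nil
    }

    func main() {
        guest, err := parseUnixFrac("1706650640.233768164")
        if err != nil {
            panic(err)
        }
        host := time.Date(2024, 1, 30, 21, 37, 20, 118847043, time.UTC) // host wall clock from the log
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed tolerance for this sketch
        fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
    }

Run against the values above, this prints a delta of about 114.921121ms, matching the figure reported by fix.go in the log.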
	I0130 21:37:20.242465  664102 start.go:83] releasing machines lock for "multinode-721181-m02", held for 1m31.546076139s
	I0130 21:37:20.242503  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:37:20.242818  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetIP
	I0130 21:37:20.245734  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.246115  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:37:20.246137  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.248230  664102 out.go:177] * Found network options:
	I0130 21:37:20.249629  664102 out.go:177]   - NO_PROXY=192.168.39.174
	W0130 21:37:20.250884  664102 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 21:37:20.250925  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:37:20.251560  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:37:20.251750  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:37:20.251868  664102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 21:37:20.251914  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	W0130 21:37:20.251987  664102 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 21:37:20.252067  664102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 21:37:20.252098  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:37:20.254768  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.254799  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.255159  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:37:20.255198  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:37:20.255224  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.255242  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:20.255298  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:37:20.255409  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:37:20.255486  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:37:20.255586  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:37:20.255629  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:37:20.255730  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m02/id_rsa Username:docker}
	I0130 21:37:20.255782  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:37:20.255911  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m02/id_rsa Username:docker}
	I0130 21:37:20.363221  664102 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0130 21:37:20.491843  664102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0130 21:37:20.498213  664102 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0130 21:37:20.498255  664102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 21:37:20.498315  664102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 21:37:20.508394  664102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0130 21:37:20.508415  664102 start.go:475] detecting cgroup driver to use...
	I0130 21:37:20.508476  664102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 21:37:20.523790  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 21:37:20.536751  664102 docker.go:217] disabling cri-docker service (if available) ...
	I0130 21:37:20.536801  664102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 21:37:20.549396  664102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 21:37:20.561958  664102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 21:37:20.699770  664102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 21:37:20.825953  664102 docker.go:233] disabling docker service ...
	I0130 21:37:20.826016  664102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 21:37:20.845008  664102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 21:37:20.859080  664102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 21:37:20.993954  664102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 21:37:21.122887  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
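Before CRI-O can be made the node's only runtime, the log stops, disables and masks the cri-docker and docker units. The following is a hedged Go sketch of that shutdown sequence; the systemctl verbs and unit names are taken from the log, the wrapper function is illustrative, and stop failures are tolerated because a unit may simply not be running.

package main

import (
	"fmt"
	"os/exec"
)

// systemctl runs one "sudo systemctl ..." invocation and wraps any failure
// together with the command's combined output.
func systemctl(args ...string) error {
	out, err := exec.Command("sudo", append([]string{"systemctl"}, args...)...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
	}
	return nil
}

func main() {
	// Same sequence as the log: stop, then disable the socket, then mask the
	// service, first for cri-docker and then for docker itself.
	steps := [][]string{
		{"stop", "-f", "cri-docker.socket"},
		{"stop", "-f", "cri-docker.service"},
		{"disable", "cri-docker.socket"},
		{"mask", "cri-docker.service"},
		{"stop", "-f", "docker.socket"},
		{"stop", "-f", "docker.service"},
		{"disable", "docker.socket"},
		{"mask", "docker.service"},
	}
	for _, s := range steps {
		if err := systemctl(s...); err != nil {
			// A failed stop/disable is logged and ignored, as in the log.
			fmt.Println("ignoring:", err)
		}
	}
}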
	I0130 21:37:21.138562  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 21:37:21.157018  664102 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
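The printf | tee step above simply writes a one-line /etc/crictl.yaml that points crictl at CRI-O's socket. A minimal sketch of the same write, assuming it runs as root directly on the node rather than through the SSH runner:

package main

import (
	"fmt"
	"os"
)

func main() {
	// The same one-line configuration the log writes: point crictl at
	// CRI-O's socket instead of letting it probe other runtimes.
	const crictlConf = "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.MkdirAll("/etc", 0o755); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictlConf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(crictlConf)
}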
	I0130 21:37:21.157569  664102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 21:37:21.157633  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:37:21.168089  664102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 21:37:21.168139  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:37:21.177716  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:37:21.188111  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:37:21.198240  664102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 21:37:21.208583  664102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 21:37:21.217452  664102 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0130 21:37:21.217656  664102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 21:37:21.226542  664102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 21:37:21.357191  664102 ssh_runner.go:195] Run: sudo systemctl restart crio
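The sed commands above pin the pause image and switch CRI-O to the cgroupfs cgroup manager in the drop-in file /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A rough Go equivalent of those in-place edits is sketched below; the regular expressions mirror the sed patterns, rewriteCrioDropIn is an illustrative name, and root access is assumed as in the log.

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteCrioDropIn applies the same substitutions the log performs with sed:
// force the pause image, drop any existing conmon_cgroup line, and force the
// requested cgroup manager with conmon_cgroup = "pod" on the next line.
func rewriteCrioDropIn(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf("cgroup_manager = %q\nconmon_cgroup = \"pod\"", cgroupManager)))
	return os.WriteFile(path, data, 0o644)
}

func main() {
	err := rewriteCrioDropIn("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}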
	I0130 21:37:21.579684  664102 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 21:37:21.579769  664102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 21:37:21.585064  664102 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0130 21:37:21.585089  664102 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0130 21:37:21.585097  664102 command_runner.go:130] > Device: 16h/22d	Inode: 1212        Links: 1
	I0130 21:37:21.585108  664102 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 21:37:21.585116  664102 command_runner.go:130] > Access: 2024-01-30 21:37:21.504994646 +0000
	I0130 21:37:21.585134  664102 command_runner.go:130] > Modify: 2024-01-30 21:37:21.504994646 +0000
	I0130 21:37:21.585142  664102 command_runner.go:130] > Change: 2024-01-30 21:37:21.504994646 +0000
	I0130 21:37:21.585148  664102 command_runner.go:130] >  Birth: -
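After restarting crio, minikube waits up to 60s for /var/run/crio/crio.sock to appear and stats it to confirm it really is a socket. A small sketch of that readiness poll, assuming direct access to the node's filesystem:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists and is a unix socket, mirroring the
// "Will wait 60s for socket path" check in the log above.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}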
	I0130 21:37:21.585574  664102 start.go:543] Will wait 60s for crictl version
	I0130 21:37:21.585632  664102 ssh_runner.go:195] Run: which crictl
	I0130 21:37:21.589454  664102 command_runner.go:130] > /usr/bin/crictl
	I0130 21:37:21.589680  664102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 21:37:21.644685  664102 command_runner.go:130] > Version:  0.1.0
	I0130 21:37:21.644711  664102 command_runner.go:130] > RuntimeName:  cri-o
	I0130 21:37:21.644718  664102 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0130 21:37:21.644726  664102 command_runner.go:130] > RuntimeApiVersion:  v1
	I0130 21:37:21.645821  664102 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 21:37:21.645894  664102 ssh_runner.go:195] Run: crio --version
	I0130 21:37:21.689239  664102 command_runner.go:130] > crio version 1.24.1
	I0130 21:37:21.689267  664102 command_runner.go:130] > Version:          1.24.1
	I0130 21:37:21.689279  664102 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 21:37:21.689286  664102 command_runner.go:130] > GitTreeState:     dirty
	I0130 21:37:21.689296  664102 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 21:37:21.689304  664102 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 21:37:21.689310  664102 command_runner.go:130] > Compiler:         gc
	I0130 21:37:21.689317  664102 command_runner.go:130] > Platform:         linux/amd64
	I0130 21:37:21.689328  664102 command_runner.go:130] > Linkmode:         dynamic
	I0130 21:37:21.689339  664102 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 21:37:21.689350  664102 command_runner.go:130] > SeccompEnabled:   true
	I0130 21:37:21.689357  664102 command_runner.go:130] > AppArmorEnabled:  false
	I0130 21:37:21.689455  664102 ssh_runner.go:195] Run: crio --version
	I0130 21:37:21.735139  664102 command_runner.go:130] > crio version 1.24.1
	I0130 21:37:21.735168  664102 command_runner.go:130] > Version:          1.24.1
	I0130 21:37:21.735182  664102 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 21:37:21.735191  664102 command_runner.go:130] > GitTreeState:     dirty
	I0130 21:37:21.735200  664102 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 21:37:21.735208  664102 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 21:37:21.735216  664102 command_runner.go:130] > Compiler:         gc
	I0130 21:37:21.735224  664102 command_runner.go:130] > Platform:         linux/amd64
	I0130 21:37:21.735233  664102 command_runner.go:130] > Linkmode:         dynamic
	I0130 21:37:21.735246  664102 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 21:37:21.735253  664102 command_runner.go:130] > SeccompEnabled:   true
	I0130 21:37:21.735260  664102 command_runner.go:130] > AppArmorEnabled:  false
	I0130 21:37:21.738664  664102 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 21:37:21.740285  664102 out.go:177]   - env NO_PROXY=192.168.39.174
	I0130 21:37:21.741570  664102 main.go:141] libmachine: (multinode-721181-m02) Calling .GetIP
	I0130 21:37:21.744237  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:21.744610  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:37:21.744638  664102 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:37:21.744841  664102 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 21:37:21.749091  664102 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0130 21:37:21.749144  664102 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181 for IP: 192.168.39.69
	I0130 21:37:21.749162  664102 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:37:21.749325  664102 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 21:37:21.749361  664102 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 21:37:21.749373  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 21:37:21.749388  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 21:37:21.749403  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 21:37:21.749415  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 21:37:21.749484  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 21:37:21.749521  664102 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 21:37:21.749533  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 21:37:21.749556  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 21:37:21.749580  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 21:37:21.749602  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 21:37:21.749643  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:37:21.749671  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:37:21.749683  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem -> /usr/share/ca-certificates/647718.pem
	I0130 21:37:21.749697  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /usr/share/ca-certificates/6477182.pem
	I0130 21:37:21.750178  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 21:37:21.776955  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 21:37:21.802900  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 21:37:21.827794  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 21:37:21.854145  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 21:37:21.880634  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 21:37:21.905759  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
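The scp lines above push each shared CA certificate and key from the local .minikube profile to its destination on the node. The sketch below captures the same source-to-destination mapping with plain file copies; certAsset is an illustrative stand-in for minikube's vm_assets type, the paths echo the log, and a real run copies over SSH with root privileges.

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// certAsset pairs a local certificate with its destination on the node.
type certAsset struct{ src, dst string }

// copyAsset copies one asset, creating the destination directory if needed.
func copyAsset(a certAsset) error {
	in, err := os.Open(a.src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(a.dst), 0o755); err != nil {
		return err
	}
	out, err := os.Create(a.dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	home := os.Getenv("HOME")
	assets := []certAsset{
		{filepath.Join(home, ".minikube/ca.crt"), "/var/lib/minikube/certs/ca.crt"},
		{filepath.Join(home, ".minikube/ca.key"), "/var/lib/minikube/certs/ca.key"},
		{filepath.Join(home, ".minikube/proxy-client-ca.crt"), "/var/lib/minikube/certs/proxy-client-ca.crt"},
		{filepath.Join(home, ".minikube/proxy-client-ca.key"), "/var/lib/minikube/certs/proxy-client-ca.key"},
		{filepath.Join(home, ".minikube/ca.crt"), "/usr/share/ca-certificates/minikubeCA.pem"},
	}
	for _, a := range assets {
		if err := copyAsset(a); err != nil {
			fmt.Fprintln(os.Stderr, a.src, "->", a.dst, ":", err)
			os.Exit(1)
		}
	}
}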
	I0130 21:37:21.930776  664102 ssh_runner.go:195] Run: openssl version
	I0130 21:37:21.936177  664102 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0130 21:37:21.936368  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 21:37:21.946057  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:37:21.950500  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:37:21.950525  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:37:21.950561  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:37:21.955515  664102 command_runner.go:130] > b5213941
	I0130 21:37:21.955744  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 21:37:21.964857  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 21:37:21.974343  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 21:37:21.978656  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:37:21.978676  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:37:21.978711  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 21:37:21.983924  664102 command_runner.go:130] > 51391683
	I0130 21:37:21.983977  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 21:37:21.991553  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 21:37:22.001067  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 21:37:22.005112  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:37:22.005243  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:37:22.005288  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 21:37:22.010089  664102 command_runner.go:130] > 3ec20f2e
	I0130 21:37:22.010297  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
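Each CA landing in /usr/share/ca-certificates is then registered with OpenSSL's hashed-symlink scheme: openssl x509 -hash -noout yields the subject hash (b5213941, 51391683 and 3ec20f2e in the log) and /etc/ssl/certs/<hash>.0 is linked at the certificate. The sketch below collapses the log's two-step ln -fs into a single symlink and assumes the openssl binary and root access; trustCert is an illustrative name.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a certificate and links
// /etc/ssl/certs/<hash>.0 at it so OpenSSL-based clients trust it.
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash of %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mimic ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	for _, pem := range []string{
		"/usr/share/ca-certificates/minikubeCA.pem",
		"/usr/share/ca-certificates/647718.pem",
		"/usr/share/ca-certificates/6477182.pem",
	} {
		if err := trustCert(pem); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
}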
	I0130 21:37:22.018570  664102 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 21:37:22.022230  664102 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 21:37:22.022477  664102 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 21:37:22.022583  664102 ssh_runner.go:195] Run: crio config
	I0130 21:37:22.071374  664102 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0130 21:37:22.071399  664102 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0130 21:37:22.071406  664102 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0130 21:37:22.071410  664102 command_runner.go:130] > #
	I0130 21:37:22.071418  664102 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0130 21:37:22.071426  664102 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0130 21:37:22.071436  664102 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0130 21:37:22.071448  664102 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0130 21:37:22.071453  664102 command_runner.go:130] > # reload'.
	I0130 21:37:22.071464  664102 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0130 21:37:22.071475  664102 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0130 21:37:22.071486  664102 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0130 21:37:22.071492  664102 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0130 21:37:22.071496  664102 command_runner.go:130] > [crio]
	I0130 21:37:22.071504  664102 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0130 21:37:22.071514  664102 command_runner.go:130] > # containers images, in this directory.
	I0130 21:37:22.071525  664102 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0130 21:37:22.071547  664102 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0130 21:37:22.071562  664102 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0130 21:37:22.071574  664102 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0130 21:37:22.071585  664102 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0130 21:37:22.071596  664102 command_runner.go:130] > storage_driver = "overlay"
	I0130 21:37:22.071608  664102 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0130 21:37:22.071621  664102 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0130 21:37:22.071632  664102 command_runner.go:130] > storage_option = [
	I0130 21:37:22.071645  664102 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0130 21:37:22.071654  664102 command_runner.go:130] > ]
	I0130 21:37:22.071666  664102 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0130 21:37:22.071680  664102 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0130 21:37:22.071688  664102 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0130 21:37:22.071696  664102 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0130 21:37:22.071707  664102 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0130 21:37:22.071722  664102 command_runner.go:130] > # always happen on a node reboot
	I0130 21:37:22.071735  664102 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0130 21:37:22.071746  664102 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0130 21:37:22.071760  664102 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0130 21:37:22.071780  664102 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0130 21:37:22.071792  664102 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0130 21:37:22.071806  664102 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0130 21:37:22.071823  664102 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0130 21:37:22.071834  664102 command_runner.go:130] > # internal_wipe = true
	I0130 21:37:22.071845  664102 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0130 21:37:22.071859  664102 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0130 21:37:22.071873  664102 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0130 21:37:22.071885  664102 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0130 21:37:22.071896  664102 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0130 21:37:22.071906  664102 command_runner.go:130] > [crio.api]
	I0130 21:37:22.071916  664102 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0130 21:37:22.071931  664102 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0130 21:37:22.071944  664102 command_runner.go:130] > # IP address on which the stream server will listen.
	I0130 21:37:22.071956  664102 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0130 21:37:22.071971  664102 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0130 21:37:22.071985  664102 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0130 21:37:22.071992  664102 command_runner.go:130] > # stream_port = "0"
	I0130 21:37:22.072009  664102 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0130 21:37:22.072017  664102 command_runner.go:130] > # stream_enable_tls = false
	I0130 21:37:22.072029  664102 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0130 21:37:22.072040  664102 command_runner.go:130] > # stream_idle_timeout = ""
	I0130 21:37:22.072050  664102 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0130 21:37:22.072062  664102 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0130 21:37:22.072071  664102 command_runner.go:130] > # minutes.
	I0130 21:37:22.072078  664102 command_runner.go:130] > # stream_tls_cert = ""
	I0130 21:37:22.072090  664102 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0130 21:37:22.072107  664102 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0130 21:37:22.072116  664102 command_runner.go:130] > # stream_tls_key = ""
	I0130 21:37:22.072129  664102 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0130 21:37:22.072143  664102 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0130 21:37:22.072154  664102 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0130 21:37:22.072164  664102 command_runner.go:130] > # stream_tls_ca = ""
	I0130 21:37:22.072171  664102 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 21:37:22.072179  664102 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0130 21:37:22.072186  664102 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 21:37:22.072194  664102 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0130 21:37:22.072214  664102 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0130 21:37:22.072228  664102 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0130 21:37:22.072238  664102 command_runner.go:130] > [crio.runtime]
	I0130 21:37:22.072252  664102 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0130 21:37:22.072267  664102 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0130 21:37:22.072278  664102 command_runner.go:130] > # "nofile=1024:2048"
	I0130 21:37:22.072289  664102 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0130 21:37:22.072299  664102 command_runner.go:130] > # default_ulimits = [
	I0130 21:37:22.072305  664102 command_runner.go:130] > # ]
	I0130 21:37:22.072314  664102 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0130 21:37:22.072319  664102 command_runner.go:130] > # no_pivot = false
	I0130 21:37:22.072326  664102 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0130 21:37:22.072333  664102 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0130 21:37:22.072341  664102 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0130 21:37:22.072347  664102 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0130 21:37:22.072354  664102 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0130 21:37:22.072361  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 21:37:22.072368  664102 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0130 21:37:22.072372  664102 command_runner.go:130] > # Cgroup setting for conmon
	I0130 21:37:22.072379  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0130 21:37:22.072386  664102 command_runner.go:130] > conmon_cgroup = "pod"
	I0130 21:37:22.072395  664102 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0130 21:37:22.072402  664102 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0130 21:37:22.072410  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 21:37:22.072419  664102 command_runner.go:130] > conmon_env = [
	I0130 21:37:22.072429  664102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0130 21:37:22.072439  664102 command_runner.go:130] > ]
	I0130 21:37:22.072449  664102 command_runner.go:130] > # Additional environment variables to set for all the
	I0130 21:37:22.072461  664102 command_runner.go:130] > # containers. These are overridden if set in the
	I0130 21:37:22.072471  664102 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0130 21:37:22.072481  664102 command_runner.go:130] > # default_env = [
	I0130 21:37:22.072488  664102 command_runner.go:130] > # ]
	I0130 21:37:22.072502  664102 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0130 21:37:22.072512  664102 command_runner.go:130] > # selinux = false
	I0130 21:37:22.072524  664102 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0130 21:37:22.072537  664102 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0130 21:37:22.072549  664102 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0130 21:37:22.072557  664102 command_runner.go:130] > # seccomp_profile = ""
	I0130 21:37:22.072570  664102 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0130 21:37:22.072584  664102 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0130 21:37:22.072597  664102 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0130 21:37:22.072608  664102 command_runner.go:130] > # which might increase security.
	I0130 21:37:22.072619  664102 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0130 21:37:22.072631  664102 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0130 21:37:22.072645  664102 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0130 21:37:22.072660  664102 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0130 21:37:22.072671  664102 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0130 21:37:22.072683  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:37:22.072692  664102 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0130 21:37:22.072706  664102 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0130 21:37:22.072715  664102 command_runner.go:130] > # the cgroup blockio controller.
	I0130 21:37:22.072725  664102 command_runner.go:130] > # blockio_config_file = ""
	I0130 21:37:22.072736  664102 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0130 21:37:22.072749  664102 command_runner.go:130] > # irqbalance daemon.
	I0130 21:37:22.072760  664102 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0130 21:37:22.072774  664102 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0130 21:37:22.072788  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:37:22.072798  664102 command_runner.go:130] > # rdt_config_file = ""
	I0130 21:37:22.072808  664102 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0130 21:37:22.072818  664102 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0130 21:37:22.072829  664102 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0130 21:37:22.072839  664102 command_runner.go:130] > # separate_pull_cgroup = ""
	I0130 21:37:22.072850  664102 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0130 21:37:22.072865  664102 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0130 21:37:22.072875  664102 command_runner.go:130] > # will be added.
	I0130 21:37:22.072883  664102 command_runner.go:130] > # default_capabilities = [
	I0130 21:37:22.072893  664102 command_runner.go:130] > # 	"CHOWN",
	I0130 21:37:22.072901  664102 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0130 21:37:22.072911  664102 command_runner.go:130] > # 	"FSETID",
	I0130 21:37:22.072920  664102 command_runner.go:130] > # 	"FOWNER",
	I0130 21:37:22.072929  664102 command_runner.go:130] > # 	"SETGID",
	I0130 21:37:22.072939  664102 command_runner.go:130] > # 	"SETUID",
	I0130 21:37:22.072946  664102 command_runner.go:130] > # 	"SETPCAP",
	I0130 21:37:22.072957  664102 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0130 21:37:22.072963  664102 command_runner.go:130] > # 	"KILL",
	I0130 21:37:22.072972  664102 command_runner.go:130] > # ]
	I0130 21:37:22.072983  664102 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0130 21:37:22.073001  664102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 21:37:22.073013  664102 command_runner.go:130] > # default_sysctls = [
	I0130 21:37:22.073022  664102 command_runner.go:130] > # ]
	I0130 21:37:22.073033  664102 command_runner.go:130] > # List of devices on the host that a
	I0130 21:37:22.073046  664102 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0130 21:37:22.073058  664102 command_runner.go:130] > # allowed_devices = [
	I0130 21:37:22.073066  664102 command_runner.go:130] > # 	"/dev/fuse",
	I0130 21:37:22.073075  664102 command_runner.go:130] > # ]
	I0130 21:37:22.073084  664102 command_runner.go:130] > # List of additional devices. specified as
	I0130 21:37:22.073100  664102 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0130 21:37:22.073112  664102 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0130 21:37:22.073137  664102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 21:37:22.073148  664102 command_runner.go:130] > # additional_devices = [
	I0130 21:37:22.073155  664102 command_runner.go:130] > # ]
	I0130 21:37:22.073167  664102 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0130 21:37:22.073174  664102 command_runner.go:130] > # cdi_spec_dirs = [
	I0130 21:37:22.073184  664102 command_runner.go:130] > # 	"/etc/cdi",
	I0130 21:37:22.073191  664102 command_runner.go:130] > # 	"/var/run/cdi",
	I0130 21:37:22.073199  664102 command_runner.go:130] > # ]
	I0130 21:37:22.073209  664102 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0130 21:37:22.073223  664102 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0130 21:37:22.073234  664102 command_runner.go:130] > # Defaults to false.
	I0130 21:37:22.073246  664102 command_runner.go:130] > # device_ownership_from_security_context = false
	I0130 21:37:22.073258  664102 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0130 21:37:22.073270  664102 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0130 21:37:22.073278  664102 command_runner.go:130] > # hooks_dir = [
	I0130 21:37:22.073284  664102 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0130 21:37:22.073290  664102 command_runner.go:130] > # ]
	I0130 21:37:22.073296  664102 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0130 21:37:22.073305  664102 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0130 21:37:22.073310  664102 command_runner.go:130] > # its default mounts from the following two files:
	I0130 21:37:22.073317  664102 command_runner.go:130] > #
	I0130 21:37:22.073328  664102 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0130 21:37:22.073343  664102 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0130 21:37:22.073355  664102 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0130 21:37:22.073365  664102 command_runner.go:130] > #
	I0130 21:37:22.073376  664102 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0130 21:37:22.073390  664102 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0130 21:37:22.073404  664102 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0130 21:37:22.073415  664102 command_runner.go:130] > #      only add mounts it finds in this file.
	I0130 21:37:22.073421  664102 command_runner.go:130] > #
	I0130 21:37:22.073427  664102 command_runner.go:130] > # default_mounts_file = ""
	I0130 21:37:22.073439  664102 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0130 21:37:22.073452  664102 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0130 21:37:22.073480  664102 command_runner.go:130] > pids_limit = 1024
	I0130 21:37:22.073492  664102 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0130 21:37:22.073505  664102 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0130 21:37:22.073519  664102 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0130 21:37:22.073534  664102 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0130 21:37:22.073543  664102 command_runner.go:130] > # log_size_max = -1
	I0130 21:37:22.073554  664102 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0130 21:37:22.073564  664102 command_runner.go:130] > # log_to_journald = false
	I0130 21:37:22.073574  664102 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0130 21:37:22.073584  664102 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0130 21:37:22.073596  664102 command_runner.go:130] > # Path to directory for container attach sockets.
	I0130 21:37:22.073606  664102 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0130 21:37:22.073617  664102 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0130 21:37:22.073627  664102 command_runner.go:130] > # bind_mount_prefix = ""
	I0130 21:37:22.073638  664102 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0130 21:37:22.073647  664102 command_runner.go:130] > # read_only = false
	I0130 21:37:22.073657  664102 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0130 21:37:22.073673  664102 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0130 21:37:22.073687  664102 command_runner.go:130] > # live configuration reload.
	I0130 21:37:22.073697  664102 command_runner.go:130] > # log_level = "info"
	I0130 21:37:22.073710  664102 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0130 21:37:22.073722  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:37:22.073730  664102 command_runner.go:130] > # log_filter = ""
	I0130 21:37:22.073743  664102 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0130 21:37:22.073757  664102 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0130 21:37:22.073767  664102 command_runner.go:130] > # separated by comma.
	I0130 21:37:22.073777  664102 command_runner.go:130] > # uid_mappings = ""
	I0130 21:37:22.073791  664102 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0130 21:37:22.073805  664102 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0130 21:37:22.073815  664102 command_runner.go:130] > # separated by comma.
	I0130 21:37:22.073822  664102 command_runner.go:130] > # gid_mappings = ""
	I0130 21:37:22.073834  664102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0130 21:37:22.073847  664102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 21:37:22.073860  664102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 21:37:22.073870  664102 command_runner.go:130] > # minimum_mappable_uid = -1
	I0130 21:37:22.073884  664102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0130 21:37:22.073898  664102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 21:37:22.073910  664102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 21:37:22.073918  664102 command_runner.go:130] > # minimum_mappable_gid = -1
	I0130 21:37:22.073930  664102 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0130 21:37:22.073943  664102 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0130 21:37:22.073956  664102 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0130 21:37:22.073966  664102 command_runner.go:130] > # ctr_stop_timeout = 30
	I0130 21:37:22.073979  664102 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0130 21:37:22.073991  664102 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0130 21:37:22.074006  664102 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0130 21:37:22.074015  664102 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0130 21:37:22.074025  664102 command_runner.go:130] > drop_infra_ctr = false
	I0130 21:37:22.074036  664102 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0130 21:37:22.074049  664102 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0130 21:37:22.074065  664102 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0130 21:37:22.074074  664102 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0130 21:37:22.074080  664102 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0130 21:37:22.074087  664102 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0130 21:37:22.074093  664102 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0130 21:37:22.074102  664102 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0130 21:37:22.074107  664102 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0130 21:37:22.074116  664102 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0130 21:37:22.074122  664102 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0130 21:37:22.074131  664102 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0130 21:37:22.074135  664102 command_runner.go:130] > # default_runtime = "runc"
	I0130 21:37:22.074142  664102 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0130 21:37:22.074149  664102 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0130 21:37:22.074160  664102 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0130 21:37:22.074166  664102 command_runner.go:130] > # creation as a file is not desired either.
	I0130 21:37:22.074174  664102 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0130 21:37:22.074182  664102 command_runner.go:130] > # the hostname is being managed dynamically.
	I0130 21:37:22.074187  664102 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0130 21:37:22.074193  664102 command_runner.go:130] > # ]
	I0130 21:37:22.074199  664102 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0130 21:37:22.074208  664102 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0130 21:37:22.074216  664102 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0130 21:37:22.074225  664102 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0130 21:37:22.074231  664102 command_runner.go:130] > #
	I0130 21:37:22.074236  664102 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0130 21:37:22.074243  664102 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0130 21:37:22.074248  664102 command_runner.go:130] > #  runtime_type = "oci"
	I0130 21:37:22.074255  664102 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0130 21:37:22.074262  664102 command_runner.go:130] > #  privileged_without_host_devices = false
	I0130 21:37:22.074268  664102 command_runner.go:130] > #  allowed_annotations = []
	I0130 21:37:22.074272  664102 command_runner.go:130] > # Where:
	I0130 21:37:22.074279  664102 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0130 21:37:22.074287  664102 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0130 21:37:22.074295  664102 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0130 21:37:22.074302  664102 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0130 21:37:22.074307  664102 command_runner.go:130] > #   in $PATH.
	I0130 21:37:22.074314  664102 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0130 21:37:22.074321  664102 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0130 21:37:22.074327  664102 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0130 21:37:22.074334  664102 command_runner.go:130] > #   state.
	I0130 21:37:22.074341  664102 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0130 21:37:22.074349  664102 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0130 21:37:22.074355  664102 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0130 21:37:22.074363  664102 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0130 21:37:22.074370  664102 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0130 21:37:22.074379  664102 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0130 21:37:22.074383  664102 command_runner.go:130] > #   The currently recognized values are:
	I0130 21:37:22.074393  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0130 21:37:22.074400  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0130 21:37:22.074408  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0130 21:37:22.074415  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0130 21:37:22.074424  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0130 21:37:22.074433  664102 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0130 21:37:22.074441  664102 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0130 21:37:22.074448  664102 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0130 21:37:22.074455  664102 command_runner.go:130] > #   should be moved to the container's cgroup
	I0130 21:37:22.074459  664102 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0130 21:37:22.074466  664102 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0130 21:37:22.074475  664102 command_runner.go:130] > runtime_type = "oci"
	I0130 21:37:22.074481  664102 command_runner.go:130] > runtime_root = "/run/runc"
	I0130 21:37:22.074486  664102 command_runner.go:130] > runtime_config_path = ""
	I0130 21:37:22.074492  664102 command_runner.go:130] > monitor_path = ""
	I0130 21:37:22.074496  664102 command_runner.go:130] > monitor_cgroup = ""
	I0130 21:37:22.074503  664102 command_runner.go:130] > monitor_exec_cgroup = ""
	I0130 21:37:22.074509  664102 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0130 21:37:22.074515  664102 command_runner.go:130] > # running containers
	I0130 21:37:22.074520  664102 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0130 21:37:22.074528  664102 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0130 21:37:22.074557  664102 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0130 21:37:22.074565  664102 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0130 21:37:22.074571  664102 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0130 21:37:22.074578  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0130 21:37:22.074583  664102 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0130 21:37:22.074590  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0130 21:37:22.074595  664102 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0130 21:37:22.074602  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0130 21:37:22.074609  664102 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0130 21:37:22.074617  664102 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0130 21:37:22.074625  664102 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0130 21:37:22.074635  664102 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0130 21:37:22.074650  664102 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0130 21:37:22.074662  664102 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0130 21:37:22.074679  664102 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0130 21:37:22.074695  664102 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0130 21:37:22.074707  664102 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0130 21:37:22.074721  664102 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0130 21:37:22.074731  664102 command_runner.go:130] > # Example:
	I0130 21:37:22.074742  664102 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0130 21:37:22.074754  664102 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0130 21:37:22.074764  664102 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0130 21:37:22.074776  664102 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0130 21:37:22.074785  664102 command_runner.go:130] > # cpuset = 0
	I0130 21:37:22.074793  664102 command_runner.go:130] > # cpushares = "0-1"
	I0130 21:37:22.074798  664102 command_runner.go:130] > # Where:
	I0130 21:37:22.074809  664102 command_runner.go:130] > # The workload name is workload-type.
	I0130 21:37:22.074823  664102 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0130 21:37:22.074836  664102 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0130 21:37:22.074849  664102 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0130 21:37:22.074864  664102 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0130 21:37:22.074873  664102 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0130 21:37:22.074877  664102 command_runner.go:130] > # 
	I0130 21:37:22.074886  664102 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0130 21:37:22.074892  664102 command_runner.go:130] > #
	I0130 21:37:22.074898  664102 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0130 21:37:22.074907  664102 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0130 21:37:22.074915  664102 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0130 21:37:22.074924  664102 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0130 21:37:22.074932  664102 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0130 21:37:22.074938  664102 command_runner.go:130] > [crio.image]
	I0130 21:37:22.074944  664102 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0130 21:37:22.074950  664102 command_runner.go:130] > # default_transport = "docker://"
	I0130 21:37:22.074957  664102 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0130 21:37:22.074966  664102 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0130 21:37:22.074976  664102 command_runner.go:130] > # global_auth_file = ""
	I0130 21:37:22.074981  664102 command_runner.go:130] > # The image used to instantiate infra containers.
	I0130 21:37:22.074988  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:37:22.074999  664102 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0130 21:37:22.075009  664102 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0130 21:37:22.075017  664102 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0130 21:37:22.075022  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:37:22.075029  664102 command_runner.go:130] > # pause_image_auth_file = ""
	I0130 21:37:22.075035  664102 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0130 21:37:22.075046  664102 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0130 21:37:22.075059  664102 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0130 21:37:22.075072  664102 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0130 21:37:22.075083  664102 command_runner.go:130] > # pause_command = "/pause"
	I0130 21:37:22.075096  664102 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0130 21:37:22.075110  664102 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0130 21:37:22.075124  664102 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0130 21:37:22.075136  664102 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0130 21:37:22.075148  664102 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0130 21:37:22.075158  664102 command_runner.go:130] > # signature_policy = ""
	I0130 21:37:22.075169  664102 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0130 21:37:22.075182  664102 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0130 21:37:22.075192  664102 command_runner.go:130] > # changing them here.
	I0130 21:37:22.075202  664102 command_runner.go:130] > # insecure_registries = [
	I0130 21:37:22.075211  664102 command_runner.go:130] > # ]
	I0130 21:37:22.075220  664102 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0130 21:37:22.075228  664102 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0130 21:37:22.075235  664102 command_runner.go:130] > # image_volumes = "mkdir"
	I0130 21:37:22.075240  664102 command_runner.go:130] > # Temporary directory to use for storing big files
	I0130 21:37:22.075246  664102 command_runner.go:130] > # big_files_temporary_dir = ""
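
The [crio.image] table above pins pause_image to registry.k8s.io/pause:3.9 and leaves the remaining keys at their commented defaults. A minimal sketch of inspecting the effective image settings by scanning the main config file; the /etc/crio/crio.conf path is an assumption (CRI-O's conventional location), not something shown in this log:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Assumed config path; CRI-O also honors drop-ins under /etc/crio/crio.conf.d.
		f, err := os.Open("/etc/crio/crio.conf")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		inImage := false
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "[") {
				inImage = line == "[crio.image]" // track which TOML table we are in
				continue
			}
			if inImage && line != "" && !strings.HasPrefix(line, "#") {
				fmt.Println(line) // e.g. pause_image = "registry.k8s.io/pause:3.9"
			}
		}
	}
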
	I0130 21:37:22.075253  664102 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0130 21:37:22.075259  664102 command_runner.go:130] > # CNI plugins.
	I0130 21:37:22.075263  664102 command_runner.go:130] > [crio.network]
	I0130 21:37:22.075271  664102 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0130 21:37:22.075277  664102 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0130 21:37:22.075283  664102 command_runner.go:130] > # cni_default_network = ""
	I0130 21:37:22.075290  664102 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0130 21:37:22.075298  664102 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0130 21:37:22.075303  664102 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0130 21:37:22.075310  664102 command_runner.go:130] > # plugin_dirs = [
	I0130 21:37:22.075314  664102 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0130 21:37:22.075320  664102 command_runner.go:130] > # ]
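
With network_dir left at its default, CRI-O picks up the first CNI network it finds under /etc/cni/net.d/. A small standard-library sketch that lists those configuration files in name order (the directory path is the default quoted above; it has to run on the node itself):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Default CNI config directory from the [crio.network] table above.
		entries, err := os.ReadDir("/etc/cni/net.d")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// os.ReadDir returns entries sorted by filename, the conventional order
		// in which CNI config directories are scanned for the "first one found".
		for _, e := range entries {
			fmt.Println(e.Name())
		}
	}
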
	I0130 21:37:22.075326  664102 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0130 21:37:22.075330  664102 command_runner.go:130] > [crio.metrics]
	I0130 21:37:22.075337  664102 command_runner.go:130] > # Globally enable or disable metrics support.
	I0130 21:37:22.075342  664102 command_runner.go:130] > enable_metrics = true
	I0130 21:37:22.075349  664102 command_runner.go:130] > # Specify enabled metrics collectors.
	I0130 21:37:22.075354  664102 command_runner.go:130] > # Per default all metrics are enabled.
	I0130 21:37:22.075366  664102 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0130 21:37:22.075380  664102 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0130 21:37:22.075393  664102 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0130 21:37:22.075404  664102 command_runner.go:130] > # metrics_collectors = [
	I0130 21:37:22.075413  664102 command_runner.go:130] > # 	"operations",
	I0130 21:37:22.075421  664102 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0130 21:37:22.075426  664102 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0130 21:37:22.075432  664102 command_runner.go:130] > # 	"operations_errors",
	I0130 21:37:22.075437  664102 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0130 21:37:22.075443  664102 command_runner.go:130] > # 	"image_pulls_by_name",
	I0130 21:37:22.075448  664102 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0130 21:37:22.075458  664102 command_runner.go:130] > # 	"image_pulls_failures",
	I0130 21:37:22.075468  664102 command_runner.go:130] > # 	"image_pulls_successes",
	I0130 21:37:22.075479  664102 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0130 21:37:22.075487  664102 command_runner.go:130] > # 	"image_layer_reuse",
	I0130 21:37:22.075497  664102 command_runner.go:130] > # 	"containers_oom_total",
	I0130 21:37:22.075506  664102 command_runner.go:130] > # 	"containers_oom",
	I0130 21:37:22.075514  664102 command_runner.go:130] > # 	"processes_defunct",
	I0130 21:37:22.075522  664102 command_runner.go:130] > # 	"operations_total",
	I0130 21:37:22.075530  664102 command_runner.go:130] > # 	"operations_latency_seconds",
	I0130 21:37:22.075542  664102 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0130 21:37:22.075553  664102 command_runner.go:130] > # 	"operations_errors_total",
	I0130 21:37:22.075563  664102 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0130 21:37:22.075572  664102 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0130 21:37:22.075584  664102 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0130 21:37:22.075594  664102 command_runner.go:130] > # 	"image_pulls_success_total",
	I0130 21:37:22.075604  664102 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0130 21:37:22.075611  664102 command_runner.go:130] > # 	"containers_oom_count_total",
	I0130 21:37:22.075615  664102 command_runner.go:130] > # ]
	I0130 21:37:22.075624  664102 command_runner.go:130] > # The port on which the metrics server will listen.
	I0130 21:37:22.075634  664102 command_runner.go:130] > # metrics_port = 9090
	I0130 21:37:22.075647  664102 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0130 21:37:22.075658  664102 command_runner.go:130] > # metrics_socket = ""
	I0130 21:37:22.075667  664102 command_runner.go:130] > # The certificate for the secure metrics server.
	I0130 21:37:22.075680  664102 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0130 21:37:22.075694  664102 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0130 21:37:22.075703  664102 command_runner.go:130] > # certificate on any modification event.
	I0130 21:37:22.075707  664102 command_runner.go:130] > # metrics_cert = ""
	I0130 21:37:22.075718  664102 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0130 21:37:22.075729  664102 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0130 21:37:22.075739  664102 command_runner.go:130] > # metrics_key = ""
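
enable_metrics is set to true in this run while metrics_port stays at its default, so the runtime should expose Prometheus-format metrics over plain HTTP. A sketch that fetches and prints the start of that output; the URL http://127.0.0.1:9090/metrics is an assumption built from the default port above plus the conventional /metrics path, and it must be run on the node:

	package main

	import (
		"bufio"
		"fmt"
		"net/http"
		"os"
	)

	func main() {
		// Assumed endpoint: default metrics_port 9090 and the usual /metrics path.
		resp, err := http.Get("http://127.0.0.1:9090/metrics")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()

		sc := bufio.NewScanner(resp.Body)
		for i := 0; i < 20 && sc.Scan(); i++ {
			// Collector names carry the "crio_" / "container_runtime_" prefixes
			// described above, e.g. crio_operations_total.
			fmt.Println(sc.Text())
		}
	}
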
	I0130 21:37:22.075752  664102 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0130 21:37:22.075762  664102 command_runner.go:130] > [crio.tracing]
	I0130 21:37:22.075772  664102 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0130 21:37:22.075782  664102 command_runner.go:130] > # enable_tracing = false
	I0130 21:37:22.075790  664102 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0130 21:37:22.075797  664102 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0130 21:37:22.075802  664102 command_runner.go:130] > # Number of samples to collect per million spans.
	I0130 21:37:22.075809  664102 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0130 21:37:22.075819  664102 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0130 21:37:22.075826  664102 command_runner.go:130] > [crio.stats]
	I0130 21:37:22.075841  664102 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0130 21:37:22.075853  664102 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0130 21:37:22.075864  664102 command_runner.go:130] > # stats_collection_period = 0
	I0130 21:37:22.075902  664102 command_runner.go:130] ! time="2024-01-30 21:37:22.059478039Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0130 21:37:22.075916  664102 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0130 21:37:22.076020  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:37:22.076035  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:37:22.076045  664102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 21:37:22.076067  664102 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.69 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-721181 NodeName:multinode-721181-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 21:37:22.076194  664102 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-721181-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
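
The kubeadm config printed above is rendered from the option struct logged just before it (node name, advertise address, pod subnet, Kubernetes version and so on). A rough, hypothetical sketch of that rendering step using text/template from the standard library; the struct fields and the heavily trimmed template are illustrative and are not minikube's actual template:

	package main

	import (
		"os"
		"text/template"
	)

	// KubeadmParams holds the handful of values substituted below; the field
	// names are illustrative, not minikube's internal ones.
	type KubeadmParams struct {
		NodeName         string
		AdvertiseAddress string
		PodSubnet        string
		K8sVersion       string
	}

	const initConfig = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: 8443
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		// Values taken from the rendered config above.
		p := KubeadmParams{
			NodeName:         "multinode-721181-m02",
			AdvertiseAddress: "192.168.39.69",
			PodSubnet:        "10.244.0.0/16",
			K8sVersion:       "v1.28.4",
		}
		tmpl := template.Must(template.New("kubeadm").Parse(initConfig))
		if err := tmpl.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
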
	
	I0130 21:37:22.076255  664102 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-721181-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 21:37:22.076307  664102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 21:37:22.084251  664102 command_runner.go:130] > kubeadm
	I0130 21:37:22.084271  664102 command_runner.go:130] > kubectl
	I0130 21:37:22.084278  664102 command_runner.go:130] > kubelet
	I0130 21:37:22.084442  664102 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 21:37:22.084512  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0130 21:37:22.092252  664102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0130 21:37:22.107550  664102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
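
The two "scp memory" steps copy a generated kubelet systemd unit and its 10-kubeadm.conf drop-in onto the node. A sketch of the equivalent local operation with os.MkdirAll and os.WriteFile; the drop-in body is paraphrased from the kubelet unit logged above and may not match byte-for-byte what minikube writes (the log reports 379 and 352 bytes for the two files):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	func main() {
		dropInDir := "/etc/systemd/system/kubelet.service.d"
		if err := os.MkdirAll(dropInDir, 0o755); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Unit fragment paraphrased from the kubelet ExecStart logged above;
		// the real file minikube copies over may differ in detail.
		unit := `[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-721181-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.69
	`
		path := filepath.Join(dropInDir, "10-kubeadm.conf")
		if err := os.WriteFile(path, []byte(unit), 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("wrote", path)
	}
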
	I0130 21:37:22.122732  664102 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0130 21:37:22.126194  664102 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
	I0130 21:37:22.126350  664102 host.go:66] Checking if "multinode-721181" exists ...
	I0130 21:37:22.126692  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:37:22.126808  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:37:22.126850  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:37:22.143301  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46735
	I0130 21:37:22.143705  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:37:22.144164  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:37:22.144193  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:37:22.144508  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:37:22.144714  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:37:22.144907  664102 start.go:304] JoinCluster: &{Name:multinode-721181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:37:22.145065  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0130 21:37:22.145088  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:37:22.147813  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:37:22.148251  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:37:22.148290  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:37:22.148432  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:37:22.148593  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:37:22.148726  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:37:22.148830  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:37:22.323231  664102 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token olse2y.kflknki5js4h73vm --discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
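
The join command above is produced by running kubeadm on the primary node with the flags shown in the ssh_runner step. A sketch of capturing the same output; it assumes kubeadm is on PATH and that the caller already has cluster-admin credentials, whereas minikube runs this over SSH inside the VM with its bundled binaries:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Same flags as the logged command: a non-expiring token plus the printed join line.
		cmd := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0")
		out, err := cmd.Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Output has the form:
		// kubeadm join control-plane.minikube.internal:8443 --token ... --discovery-token-ca-cert-hash sha256:...
		fmt.Println(strings.TrimSpace(string(out)))
	}
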
	I0130 21:37:22.328009  664102 start.go:317] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0130 21:37:22.328083  664102 host.go:66] Checking if "multinode-721181" exists ...
	I0130 21:37:22.328537  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:37:22.328591  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:37:22.343342  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45655
	I0130 21:37:22.343805  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:37:22.344332  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:37:22.344360  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:37:22.344703  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:37:22.344911  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:37:22.345131  664102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-721181-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0130 21:37:22.345152  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:37:22.347936  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:37:22.348452  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:37:22.348475  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:37:22.348643  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:37:22.348822  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:37:22.348960  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:37:22.349091  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:37:22.560665  664102 command_runner.go:130] > node/multinode-721181-m02 cordoned
	I0130 21:37:25.598357  664102 command_runner.go:130] > pod "busybox-5b5d89c9d6-9gv46" has DeletionTimestamp older than 1 seconds, skipping
	I0130 21:37:25.598385  664102 command_runner.go:130] > node/multinode-721181-m02 drained
	I0130 21:37:25.600085  664102 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0130 21:37:25.600103  664102 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-8thzp, kube-system/kube-proxy-s9pwd
	I0130 21:37:25.600129  664102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-721181-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.254968175s)
	I0130 21:37:25.600152  664102 node.go:108] successfully drained node "m02"
	I0130 21:37:25.600605  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:37:25.600946  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:37:25.601527  664102 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0130 21:37:25.601602  664102 round_trippers.go:463] DELETE https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:37:25.601613  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:25.601626  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:25.601640  664102 round_trippers.go:473]     Content-Type: application/json
	I0130 21:37:25.601652  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:25.615601  664102 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0130 21:37:25.615624  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:25.615635  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:25.615663  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:25.615672  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:25.615679  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:25.615687  664102 round_trippers.go:580]     Content-Length: 171
	I0130 21:37:25.615696  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:25 GMT
	I0130 21:37:25.615703  664102 round_trippers.go:580]     Audit-Id: 4958142f-5c08-400d-8d1d-c3b3f2d684be
	I0130 21:37:25.615746  664102 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-721181-m02","kind":"nodes","uid":"47058aff-0457-4267-b98b-c3be7d21f2dc"}}
	I0130 21:37:25.615783  664102 node.go:124] successfully deleted node "m02"
	I0130 21:37:25.615796  664102 start.go:321] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
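
Removing the stale worker is a plain DELETE of the Node object, which the log performs with raw round_trippers calls against /api/v1/nodes/multinode-721181-m02. A hedged client-go equivalent; the kubeconfig path is the one the log loads, and the imports assume a reasonably recent client-go (v0.18 or newer, where Delete takes a context):

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Kubeconfig path taken from the log; adjust for your environment.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18014-640473/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Equivalent of the logged DELETE /api/v1/nodes/multinode-721181-m02.
		if err := cs.CoreV1().Nodes().Delete(context.TODO(), "multinode-721181-m02", metav1.DeleteOptions{}); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("node deleted")
	}
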
	I0130 21:37:25.615825  664102 start.go:325] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0130 21:37:25.615845  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token olse2y.kflknki5js4h73vm --discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-721181-m02"
	I0130 21:37:25.682260  664102 command_runner.go:130] ! W0130 21:37:25.673713    2660 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0130 21:37:25.682302  664102 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0130 21:37:25.836638  664102 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0130 21:37:25.836682  664102 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0130 21:37:26.582895  664102 command_runner.go:130] > [preflight] Running pre-flight checks
	I0130 21:37:26.582931  664102 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0130 21:37:26.582946  664102 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0130 21:37:26.582959  664102 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 21:37:26.582971  664102 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 21:37:26.582991  664102 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0130 21:37:26.583002  664102 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0130 21:37:26.583011  664102 command_runner.go:130] > This node has joined the cluster:
	I0130 21:37:26.583021  664102 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0130 21:37:26.583035  664102 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0130 21:37:26.583049  664102 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0130 21:37:26.583094  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0130 21:37:26.829588  664102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=multinode-721181 minikube.k8s.io/updated_at=2024_01_30T21_37_26_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:37:26.927607  664102 command_runner.go:130] > node/multinode-721181-m02 labeled
	I0130 21:37:26.942618  664102 command_runner.go:130] > node/multinode-721181-m03 labeled
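
The minikube.k8s.io/* labels are applied with the kubectl invocation logged above; note the -l minikube.k8s.io/primary!=true selector, which is why both worker nodes come back labeled. A short sketch of the same call via os/exec; it assumes a local kubectl on PATH, while minikube actually runs its bundled kubectl on the VM:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Mirrors the logged command: label every node that is not the primary.
		cmd := exec.Command("kubectl",
			"label", "nodes",
			"minikube.k8s.io/name=multinode-721181",
			"minikube.k8s.io/primary=false",
			"-l", "minikube.k8s.io/primary!=true",
			"--overwrite")
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig") // path from the log
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out)) // e.g. node/multinode-721181-m02 labeled
		if err != nil {
			os.Exit(1)
		}
	}
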
	I0130 21:37:26.944801  664102 start.go:306] JoinCluster complete in 4.799895948s
	I0130 21:37:26.944822  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:37:26.944829  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:37:26.944886  664102 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 21:37:26.951159  664102 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0130 21:37:26.951191  664102 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0130 21:37:26.951201  664102 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0130 21:37:26.951211  664102 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 21:37:26.951221  664102 command_runner.go:130] > Access: 2024-01-30 21:34:55.719579323 +0000
	I0130 21:37:26.951233  664102 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0130 21:37:26.951241  664102 command_runner.go:130] > Change: 2024-01-30 21:34:53.860579323 +0000
	I0130 21:37:26.951258  664102 command_runner.go:130] >  Birth: -
	I0130 21:37:26.951911  664102 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 21:37:26.951931  664102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 21:37:26.970135  664102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 21:37:27.340586  664102 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0130 21:37:27.344690  664102 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0130 21:37:27.348037  664102 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0130 21:37:27.358245  664102 command_runner.go:130] > daemonset.apps/kindnet configured
	I0130 21:37:27.361345  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:37:27.361593  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:37:27.361931  664102 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0130 21:37:27.361946  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.361955  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.361960  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.363754  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.363772  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.363777  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.363783  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.363793  664102 round_trippers.go:580]     Content-Length: 291
	I0130 21:37:27.363805  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.363818  664102 round_trippers.go:580]     Audit-Id: 7c21c559-c631-4db1-98d2-310b068830d3
	I0130 21:37:27.363828  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.363836  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.363861  664102 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f33652aa-ee2d-484a-8c79-9724e39fcaab","resourceVersion":"864","creationTimestamp":"2024-01-30T21:24:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0130 21:37:27.363959  664102 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-721181" context rescaled to 1 replicas
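
Rescaling coredns to one replica goes through the Deployment's scale subresource, the same object the GET above returns. A hedged client-go sketch of the read-modify-write; clientset construction mirrors the earlier node-deletion sketch and the kubeconfig path is again the one in the log:

	package main

	import (
		"context"
		"fmt"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18014-640473/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		deploys := cs.AppsV1().Deployments("kube-system")
		scale, err := deploys.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		scale.Spec.Replicas = 1 // target from the log: rescaled to 1 replicas
		if _, err := deploys.UpdateScale(context.TODO(), "coredns", scale, metav1.UpdateOptions{}); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("coredns scaled to 1 replica")
	}
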
	I0130 21:37:27.363996  664102 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0130 21:37:27.366690  664102 out.go:177] * Verifying Kubernetes components...
	I0130 21:37:27.367970  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:37:27.383230  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:37:27.383437  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:37:27.383660  664102 node_ready.go:35] waiting up to 6m0s for node "multinode-721181-m02" to be "Ready" ...
	I0130 21:37:27.383740  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:37:27.383749  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.383757  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.383763  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.385993  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:37:27.386010  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.386016  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.386021  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.386026  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.386033  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.386038  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.386044  664102 round_trippers.go:580]     Audit-Id: 0cfd0206-7bbb-4cfc-8b26-14dcb7e98f24
	I0130 21:37:27.386403  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m02","uid":"be090718-b5cd-4d45-9ba2-6425fd24503e","resourceVersion":"1013","creationTimestamp":"2024-01-30T21:37:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_37_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:37:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0130 21:37:27.386731  664102 node_ready.go:49] node "multinode-721181-m02" has status "Ready":"True"
	I0130 21:37:27.386749  664102 node_ready.go:38] duration metric: took 3.073902ms waiting for node "multinode-721181-m02" to be "Ready" ...
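
The node_ready check amounts to reading the Node object and inspecting its NodeReady condition, which the GET above already shows as True. A hedged client-go sketch of that single check (the surrounding 6m0s retry loop is omitted); the kubeconfig path and node name are the ones in the log:

	package main

	import (
		"context"
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18014-640473/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-721181-m02", metav1.GetOptions{})
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, cond := range node.Status.Conditions {
			if cond.Type == corev1.NodeReady {
				fmt.Printf("node Ready=%v (reason: %s)\n", cond.Status == corev1.ConditionTrue, cond.Reason)
			}
		}
	}
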
	I0130 21:37:27.386760  664102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:37:27.386823  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:37:27.386834  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.386845  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.386857  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.393454  664102 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0130 21:37:27.393490  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.393500  664102 round_trippers.go:580]     Audit-Id: 7652c623-6f13-4b8d-bc21-41de3c72e58d
	I0130 21:37:27.393506  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.393512  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.393518  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.393524  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.393532  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.396400  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1020"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82238 chars]
	I0130 21:37:27.398750  664102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.398844  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:37:27.398852  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.398859  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.398865  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.401294  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:37:27.401309  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.401316  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.401321  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.401326  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.401331  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.401339  664102 round_trippers.go:580]     Audit-Id: 73c86702-61e6-4798-9dbd-859452b85b5b
	I0130 21:37:27.401346  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.402132  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0130 21:37:27.402598  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:27.402612  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.402620  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.402627  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.405670  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:37:27.405682  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.405688  664102 round_trippers.go:580]     Audit-Id: 078897d3-abbf-454d-8614-5febec8f1d92
	I0130 21:37:27.405693  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.405697  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.405702  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.405707  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.405714  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.405974  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:37:27.406228  664102 pod_ready.go:92] pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:27.406241  664102 pod_ready.go:81] duration metric: took 7.471746ms waiting for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
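
The per-pod wait is the same pattern applied to the PodReady condition, polled until it reports True or the 6m0s budget runs out. A hedged sketch of such a loop for one of the system pods checked below; the two-second poll interval and two-minute deadline are illustration values, not the ones minikube uses:

	package main

	import (
		"context"
		"fmt"
		"os"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18014-640473/kubeconfig")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}

		deadline := time.Now().Add(2 * time.Minute) // illustration only
		for {
			pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-multinode-721181", metav1.GetOptions{})
			if err == nil {
				for _, cond := range pod.Status.Conditions {
					if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
						fmt.Println("pod is Ready")
						return
					}
				}
			}
			if time.Now().After(deadline) {
				fmt.Fprintln(os.Stderr, "timed out waiting for pod to become Ready")
				os.Exit(1)
			}
			time.Sleep(2 * time.Second)
		}
	}
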
	I0130 21:37:27.406248  664102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.406293  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-721181
	I0130 21:37:27.406301  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.406309  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.406315  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.408185  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.408198  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.408204  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.408209  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.408214  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.408219  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.408224  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.408231  664102 round_trippers.go:580]     Audit-Id: 90f32bf6-cebf-4411-a385-a28d0c7ae5ce
	I0130 21:37:27.408419  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-721181","namespace":"kube-system","uid":"83f20d3f-5604-4e3c-a7c8-b38a9b20c035","resourceVersion":"838","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.mirror":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.seen":"2024-01-30T21:24:57.236042745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0130 21:37:27.408827  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:27.408846  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.408857  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.408867  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.410582  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.410595  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.410601  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.410606  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.410611  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.410616  664102 round_trippers.go:580]     Audit-Id: 60121756-debb-4fcb-9245-a9eea95818f1
	I0130 21:37:27.410623  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.410631  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.410832  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:37:27.411077  664102 pod_ready.go:92] pod "etcd-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:27.411088  664102 pod_ready.go:81] duration metric: took 4.834669ms waiting for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.411103  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.411140  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-721181
	I0130 21:37:27.411156  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.411163  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.411169  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.413070  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.413083  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.413088  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.413093  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.413098  664102 round_trippers.go:580]     Audit-Id: cd46627f-6dfc-4595-b15b-80d50cbb8071
	I0130 21:37:27.413105  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.413110  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.413115  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.413342  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-721181","namespace":"kube-system","uid":"fbcc53e1-4691-4473-b215-2cb6daeaf321","resourceVersion":"850","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.mirror":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.seen":"2024-01-30T21:24:57.236043778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0130 21:37:27.413718  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:27.413732  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.413739  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.413744  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.415589  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.415600  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.415606  664102 round_trippers.go:580]     Audit-Id: a6bbcab2-dc34-4957-8cfb-0e0fb0814755
	I0130 21:37:27.415611  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.415616  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.415621  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.415626  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.415631  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.415819  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:37:27.416175  664102 pod_ready.go:92] pod "kube-apiserver-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:27.416191  664102 pod_ready.go:81] duration metric: took 5.081747ms waiting for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.416203  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.416260  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-721181
	I0130 21:37:27.416269  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.416279  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.416293  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.418198  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.418211  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.418217  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.418223  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.418228  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.418233  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.418238  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.418244  664102 round_trippers.go:580]     Audit-Id: c1cac46d-1285-43be-b0ce-4018a5cbb140
	I0130 21:37:27.418704  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-721181","namespace":"kube-system","uid":"de8beec4-5cad-4405-b856-7475b95559ba","resourceVersion":"837","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.mirror":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.seen":"2024-01-30T21:24:57.236037857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0130 21:37:27.419041  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:27.419058  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.419065  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.419071  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.420746  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:37:27.420758  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.420764  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.420769  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.420774  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.420779  664102 round_trippers.go:580]     Audit-Id: d9fe0258-04cb-4022-a42d-96c5e84b3946
	I0130 21:37:27.420784  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.420789  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.421060  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:37:27.421398  664102 pod_ready.go:92] pod "kube-controller-manager-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:27.421414  664102 pod_ready.go:81] duration metric: took 5.202777ms waiting for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.421422  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.584816  664102 request.go:629] Waited for 163.328581ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:37:27.584909  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:37:27.584916  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.584927  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.584938  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.589761  664102 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0130 21:37:27.589788  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.589802  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.589812  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.589820  664102 round_trippers.go:580]     Audit-Id: be45233b-1267-4df4-a491-77c4bdb7a91c
	I0130 21:37:27.589829  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.589838  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.589846  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.590402  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-49rq4","generateName":"kube-proxy-","namespace":"kube-system","uid":"63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3","resourceVersion":"812","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 21:37:27.784367  664102 request.go:629] Waited for 193.404058ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:27.784469  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:27.784479  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.784492  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.784511  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.787595  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:37:27.787621  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.787632  664102 round_trippers.go:580]     Audit-Id: cb0432a0-6f3c-4832-a75b-b059fa0503f0
	I0130 21:37:27.787641  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.787649  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.787659  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.787672  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.787682  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.788095  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:37:27.788468  664102 pod_ready.go:92] pod "kube-proxy-49rq4" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:27.788494  664102 pod_ready.go:81] duration metric: took 367.064397ms waiting for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.788510  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:27.984463  664102 request.go:629] Waited for 195.871249ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:37:27.984526  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:37:27.984537  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:27.984549  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:27.984562  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:27.987996  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:37:27.988018  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:27.988024  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:27.988030  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:27 GMT
	I0130 21:37:27.988035  664102 round_trippers.go:580]     Audit-Id: fed84e8a-3a26-4823-9558-1291eea0a8ed
	I0130 21:37:27.988040  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:27.988046  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:27.988054  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:27.988357  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwg96","generateName":"kube-proxy-","namespace":"kube-system","uid":"68cc319c-45c4-4a65-9712-d4e419acd7d6","resourceVersion":"681","creationTimestamp":"2024-01-30T21:26:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0130 21:37:28.184116  664102 request.go:629] Waited for 195.287661ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:37:28.184202  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:37:28.184214  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:28.184227  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:28.184240  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:28.186910  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:37:28.186936  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:28.186946  664102 round_trippers.go:580]     Audit-Id: a0dfce1a-9442-436e-bf43-da52492d5d25
	I0130 21:37:28.186955  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:28.186963  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:28.186972  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:28.186991  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:28.186998  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:28 GMT
	I0130 21:37:28.187824  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m03","uid":"f8b13ad8-e768-466a-b155-3ab55af16d96","resourceVersion":"1014","creationTimestamp":"2024-01-30T21:27:32Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_37_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:27:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annota
tions":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detac [truncated 3966 chars]
	I0130 21:37:28.188180  664102 pod_ready.go:92] pod "kube-proxy-lwg96" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:28.188201  664102 pod_ready.go:81] duration metric: took 399.678071ms waiting for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:28.188210  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:28.384288  664102 request.go:629] Waited for 195.995913ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:37:28.384386  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:37:28.384392  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:28.384400  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:28.384407  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:28.387249  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:37:28.387274  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:28.387283  664102 round_trippers.go:580]     Audit-Id: efe1615f-38cf-4d55-a55a-8d50000bc18f
	I0130 21:37:28.387291  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:28.387299  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:28.387307  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:28.387313  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:28.387320  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:28 GMT
	I0130 21:37:28.387787  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s9pwd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6594579-7b2f-4ab5-b7f2-0b176bad1705","resourceVersion":"1032","creationTimestamp":"2024-01-30T21:26:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0130 21:37:28.584635  664102 request.go:629] Waited for 196.35984ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:37:28.584706  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:37:28.584711  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:28.584719  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:28.584725  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:28.586785  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:37:28.586806  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:28.586813  664102 round_trippers.go:580]     Audit-Id: 23ecc434-7b36-4d4a-bf54-5a63cfc048c4
	I0130 21:37:28.586819  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:28.586825  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:28.586834  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:28.586842  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:28.586853  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:28 GMT
	I0130 21:37:28.587341  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m02","uid":"be090718-b5cd-4d45-9ba2-6425fd24503e","resourceVersion":"1013","creationTimestamp":"2024-01-30T21:37:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_37_26_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:37:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0130 21:37:28.587640  664102 pod_ready.go:92] pod "kube-proxy-s9pwd" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:28.587657  664102 pod_ready.go:81] duration metric: took 399.441302ms waiting for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:28.587667  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:28.784242  664102 request.go:629] Waited for 196.47649ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:37:28.784315  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:37:28.784325  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:28.784337  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:28.784350  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:28.787493  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:37:28.787520  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:28.787528  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:28 GMT
	I0130 21:37:28.787534  664102 round_trippers.go:580]     Audit-Id: 9bf9cc13-bdcc-4dac-ae25-8dfd54effed3
	I0130 21:37:28.787547  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:28.787558  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:28.787567  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:28.787578  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:28.787755  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-721181","namespace":"kube-system","uid":"d7e4675b-0e8c-46de-9b39-435d25004a88","resourceVersion":"852","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"48930a2236670664c600a427fcb648de","kubernetes.io/config.mirror":"48930a2236670664c600a427fcb648de","kubernetes.io/config.seen":"2024-01-30T21:24:57.236041601Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0130 21:37:28.984616  664102 request.go:629] Waited for 196.42885ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:28.984730  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:37:28.984747  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:28.984759  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:28.984770  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:28.987425  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:37:28.987445  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:28.987457  664102 round_trippers.go:580]     Audit-Id: d340a5ac-db71-4dc9-bd62-db0d2e594b8b
	I0130 21:37:28.987463  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:28.987469  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:28.987474  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:28.987479  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:28.987484  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:28 GMT
	I0130 21:37:28.987797  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:37:28.988230  664102 pod_ready.go:92] pod "kube-scheduler-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:37:28.988252  664102 pod_ready.go:81] duration metric: took 400.57546ms waiting for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:37:28.988262  664102 pod_ready.go:38] duration metric: took 1.601487833s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
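The block above is minikube's readiness loop: for each system pod it GETs the pod, checks the Ready condition, re-fetches the owning node, and throttles itself client-side between requests. Purely as an illustrative sketch (not minikube's own pod_ready.go), the core predicate could be written with client-go roughly like this, assuming a kubeconfig at the default location:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the named pod currently has condition Ready=True,
    // which is the check the wait loop above keeps re-evaluating.
    func podReady(cs *kubernetes.Clientset, namespace, name string) (bool, error) {
        pod, err := cs.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        ok, err := podReady(cs, "kube-system", "kube-scheduler-multinode-721181")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("Ready:", ok)
    }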
	I0130 21:37:28.988280  664102 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 21:37:28.988330  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:37:29.003863  664102 system_svc.go:56] duration metric: took 15.575232ms WaitForService to wait for kubelet.
	I0130 21:37:29.003895  664102 kubeadm.go:581] duration metric: took 1.639868939s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
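The kubelet probe a few lines above relies only on the exit status of `systemctl is-active --quiet`, run on the node through minikube's ssh_runner. As a rough local-only sketch of the same idea (not the ssh_runner call itself):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // `systemctl is-active --quiet kubelet` exits 0 only when the unit is active,
        // so the command's error value is the readiness signal.
        err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
        fmt.Println("kubelet active:", err == nil)
    }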
	I0130 21:37:29.003934  664102 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:37:29.184497  664102 request.go:629] Waited for 180.448195ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0130 21:37:29.184574  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0130 21:37:29.184584  664102 round_trippers.go:469] Request Headers:
	I0130 21:37:29.184596  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:37:29.184610  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:37:29.187724  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:37:29.187754  664102 round_trippers.go:577] Response Headers:
	I0130 21:37:29.187764  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:37:29.187773  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:37:29.187780  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:37:29.187787  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:37:29 GMT
	I0130 21:37:29.187794  664102 round_trippers.go:580]     Audit-Id: 0bbab4b5-ee36-43d0-b7b6-4554cbaf7198
	I0130 21:37:29.187801  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:37:29.188510  664102 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1035"},"items":[{"metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16210 chars]
	I0130 21:37:29.189107  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:37:29.189127  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:37:29.189172  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:37:29.189179  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:37:29.189185  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:37:29.189192  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:37:29.189199  664102 node_conditions.go:105] duration metric: took 185.25599ms to run NodePressure ...
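The NodePressure pass lists every node once and reads its reported capacity and pressure conditions, which is where the three ephemeral-storage/CPU pairs above come from. A minimal client-go sketch of that read, again illustrative only:

    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, n := range nodes.Items {
            // The two capacity figures echoed in the node_conditions.go lines above.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
            // Flag any pressure condition that would fail the NodePressure check.
            for _, c := range n.Status.Conditions {
                if (c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure) && c.Status == corev1.ConditionTrue {
                    fmt.Printf("  under %s\n", c.Type)
                }
            }
        }
    }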
	I0130 21:37:29.189213  664102 start.go:228] waiting for startup goroutines ...
	I0130 21:37:29.189251  664102 start.go:242] writing updated cluster config ...
	I0130 21:37:29.189731  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:37:29.189824  664102 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/config.json ...
	I0130 21:37:29.192867  664102 out.go:177] * Starting worker node multinode-721181-m03 in cluster multinode-721181
	I0130 21:37:29.194205  664102 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:37:29.194232  664102 cache.go:56] Caching tarball of preloaded images
	I0130 21:37:29.194326  664102 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 21:37:29.194337  664102 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 21:37:29.194437  664102 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/config.json ...
	I0130 21:37:29.194605  664102 start.go:365] acquiring machines lock for multinode-721181-m03: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 21:37:29.194647  664102 start.go:369] acquired machines lock for "multinode-721181-m03" in 23.64µs
	I0130 21:37:29.194665  664102 start.go:96] Skipping create...Using existing machine configuration
	I0130 21:37:29.194674  664102 fix.go:54] fixHost starting: m03
	I0130 21:37:29.194914  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:37:29.194948  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:37:29.209714  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40225
	I0130 21:37:29.210192  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:37:29.210684  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:37:29.210705  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:37:29.210976  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:37:29.211191  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:37:29.211384  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetState
	I0130 21:37:29.213090  664102 fix.go:102] recreateIfNeeded on multinode-721181-m03: state=Running err=<nil>
	W0130 21:37:29.213108  664102 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 21:37:29.214830  664102 out.go:177] * Updating the running kvm2 "multinode-721181-m03" VM ...
	I0130 21:37:29.215997  664102 machine.go:88] provisioning docker machine ...
	I0130 21:37:29.216019  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:37:29.216220  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetMachineName
	I0130 21:37:29.216390  664102 buildroot.go:166] provisioning hostname "multinode-721181-m03"
	I0130 21:37:29.216406  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetMachineName
	I0130 21:37:29.216542  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:37:29.218863  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.219290  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:37:29.219319  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.219441  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:37:29.219619  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.219869  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.220099  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:37:29.220269  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:37:29.220699  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0130 21:37:29.220718  664102 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-721181-m03 && echo "multinode-721181-m03" | sudo tee /etc/hostname
	I0130 21:37:29.353492  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-721181-m03
	
	I0130 21:37:29.353532  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:37:29.356638  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.356980  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:37:29.357006  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.357230  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:37:29.357454  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.357676  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.357828  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:37:29.358012  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:37:29.358403  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0130 21:37:29.358430  664102 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-721181-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-721181-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-721181-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 21:37:29.474119  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
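The shell block above is deliberately idempotent: it only touches /etc/hosts when no entry for the new hostname exists, and rewrites an existing 127.0.1.1 line rather than appending a duplicate. The provisioner pushes such commands over SSH with key auth; a simplified, hypothetical equivalent using golang.org/x/crypto/ssh is sketched below (runRemote, the user name, and the key path are placeholders, not minikube's actual ssh_runner):

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runRemote is a hypothetical stand-in for minikube's ssh_runner: it opens one
    // SSH session with key auth and returns the command's combined output.
    func runRemote(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        // Same idempotent /etc/hosts update as the shell block above, reduced to one line.
        cmd := `grep -q 'multinode-721181-m03' /etc/hosts || echo '127.0.1.1 multinode-721181-m03' | sudo tee -a /etc/hosts`
        out, err := runRemote("192.168.39.218:22", "docker", os.ExpandEnv("$HOME/.ssh/id_rsa"), cmd)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Print(out)
    }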
	I0130 21:37:29.474150  664102 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 21:37:29.474169  664102 buildroot.go:174] setting up certificates
	I0130 21:37:29.474179  664102 provision.go:83] configureAuth start
	I0130 21:37:29.474188  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetMachineName
	I0130 21:37:29.474490  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetIP
	I0130 21:37:29.477035  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.477361  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:37:29.477387  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.477543  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:37:29.479532  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.479841  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:37:29.479867  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.479971  664102 provision.go:138] copyHostCerts
	I0130 21:37:29.480005  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:37:29.480038  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 21:37:29.480047  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 21:37:29.480111  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 21:37:29.480180  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:37:29.480198  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 21:37:29.480202  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 21:37:29.480228  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 21:37:29.480276  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:37:29.480295  664102 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 21:37:29.480302  664102 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 21:37:29.480324  664102 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 21:37:29.480381  664102 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.multinode-721181-m03 san=[192.168.39.218 192.168.39.218 localhost 127.0.0.1 minikube multinode-721181-m03]
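The server certificate generated above is signed by the local CA and carries the listed SANs (the VM IP, localhost/127.0.0.1, and the hostnames), so the server.pem later copied to /etc/docker is valid under any of those names. As a compact, self-contained illustration of issuing such a SAN-bearing certificate with the standard library (using a throwaway CA here instead of the real ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func check(err error) {
        if err != nil {
            log.Fatal(err)
        }
    }

    func main() {
        // Throwaway CA standing in for minikube's ca.pem / ca-key.pem.
        caKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "example-test-ca"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        check(err)
        caCert, err := x509.ParseCertificate(caDER)
        check(err)

        // Server certificate with SANs analogous to those in the log line above.
        srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
        check(err)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-721181-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(1, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-721181-m03"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.218"), net.ParseIP("127.0.0.1")},
        }
        der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        check(err)
        check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }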
	I0130 21:37:29.630152  664102 provision.go:172] copyRemoteCerts
	I0130 21:37:29.630212  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 21:37:29.630235  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:37:29.632868  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.633221  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:37:29.633257  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.633413  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:37:29.633629  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.633775  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:37:29.633970  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m03/id_rsa Username:docker}
	I0130 21:37:29.722008  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0130 21:37:29.722089  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 21:37:29.744400  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0130 21:37:29.744473  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0130 21:37:29.766843  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0130 21:37:29.766899  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 21:37:29.790204  664102 provision.go:86] duration metric: configureAuth took 316.012865ms
	I0130 21:37:29.790226  664102 buildroot.go:189] setting minikube options for container-runtime
	I0130 21:37:29.790428  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:37:29.790518  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:37:29.793249  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.793642  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:37:29.793667  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:37:29.793858  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:37:29.794064  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.794228  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:37:29.794357  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:37:29.794515  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:37:29.794853  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0130 21:37:29.794875  664102 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 21:39:00.291624  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 21:39:00.291676  664102 machine.go:91] provisioned docker machine in 1m31.075659082s
	I0130 21:39:00.291694  664102 start.go:300] post-start starting for "multinode-721181-m03" (driver="kvm2")
	I0130 21:39:00.291709  664102 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 21:39:00.291741  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:39:00.292069  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 21:39:00.292099  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:39:00.294923  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.295296  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:39:00.295331  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.295455  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:39:00.295685  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:39:00.295858  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:39:00.296058  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m03/id_rsa Username:docker}
	I0130 21:39:00.387581  664102 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 21:39:00.391823  664102 command_runner.go:130] > NAME=Buildroot
	I0130 21:39:00.391855  664102 command_runner.go:130] > VERSION=2021.02.12-1-g19d536a-dirty
	I0130 21:39:00.391862  664102 command_runner.go:130] > ID=buildroot
	I0130 21:39:00.391868  664102 command_runner.go:130] > VERSION_ID=2021.02.12
	I0130 21:39:00.391872  664102 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0130 21:39:00.391909  664102 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 21:39:00.391922  664102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 21:39:00.391998  664102 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 21:39:00.392087  664102 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 21:39:00.392097  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /etc/ssl/certs/6477182.pem
	I0130 21:39:00.392181  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 21:39:00.400023  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:39:00.422604  664102 start.go:303] post-start completed in 130.895431ms
	I0130 21:39:00.422626  664102 fix.go:56] fixHost completed within 1m31.227952546s
	I0130 21:39:00.422652  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:39:00.425416  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.425893  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:39:00.425937  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.426150  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:39:00.426376  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:39:00.426518  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:39:00.426679  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:39:00.426829  664102 main.go:141] libmachine: Using SSH client type: native
	I0130 21:39:00.427154  664102 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.218 22 <nil> <nil>}
	I0130 21:39:00.427169  664102 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 21:39:00.542193  664102 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706650740.534141764
	
	I0130 21:39:00.542222  664102 fix.go:206] guest clock: 1706650740.534141764
	I0130 21:39:00.542230  664102 fix.go:219] Guest: 2024-01-30 21:39:00.534141764 +0000 UTC Remote: 2024-01-30 21:39:00.42263041 +0000 UTC m=+555.457364043 (delta=111.511354ms)
	I0130 21:39:00.542246  664102 fix.go:190] guest clock delta is within tolerance: 111.511354ms
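The "date +%!s(MISSING).%!N(MISSING)" line above is the logger printing its format string literally; the command actually sent over SSH is "date +%s.%N", and the epoch timestamp it returns (1706650740.534141764) is compared against the host clock to produce the delta on the lines above. A minimal shell sketch of that comparison, assuming direct SSH access with the node key shown earlier in this log; the commands are illustrative only, and the accepted tolerance is whatever minikube enforces:

    # query the guest clock (seconds.nanoseconds) over SSH, then compare to the host clock
    GUEST=$(ssh -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m03/id_rsa \
        docker@192.168.39.218 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v g="$GUEST" -v h="$HOST" 'BEGIN { printf "guest - host: %.6fs\n", g - h }'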
	I0130 21:39:00.542251  664102 start.go:83] releasing machines lock for "multinode-721181-m03", held for 1m31.347595533s
	I0130 21:39:00.542273  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:39:00.542574  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetIP
	I0130 21:39:00.545407  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.545912  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:39:00.545937  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.547839  664102 out.go:177] * Found network options:
	I0130 21:39:00.549399  664102 out.go:177]   - NO_PROXY=192.168.39.174,192.168.39.69
	W0130 21:39:00.550808  664102 proxy.go:119] fail to check proxy env: Error ip not in block
	W0130 21:39:00.550832  664102 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 21:39:00.550845  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:39:00.551424  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:39:00.551638  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .DriverName
	I0130 21:39:00.551743  664102 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 21:39:00.551787  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	W0130 21:39:00.551872  664102 proxy.go:119] fail to check proxy env: Error ip not in block
	W0130 21:39:00.551901  664102 proxy.go:119] fail to check proxy env: Error ip not in block
	I0130 21:39:00.551990  664102 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 21:39:00.552014  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHHostname
	I0130 21:39:00.554643  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.555014  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.555110  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:39:00.555140  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.555275  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:39:00.555444  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:39:00.555470  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:39:00.555479  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:00.555630  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHPort
	I0130 21:39:00.555636  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:39:00.555882  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHKeyPath
	I0130 21:39:00.555901  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m03/id_rsa Username:docker}
	I0130 21:39:00.556012  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetSSHUsername
	I0130 21:39:00.556130  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.218 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m03/id_rsa Username:docker}
	I0130 21:39:00.787112  664102 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0130 21:39:00.787121  664102 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0130 21:39:00.793342  664102 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0130 21:39:00.793694  664102 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 21:39:00.793765  664102 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 21:39:00.801772  664102 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
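In the find command above, %!p(MISSING) is the same logging artifact: the verb handed to find is a literal "%p". The step renames any bridge or podman CNI configs to *.mk_disabled so they cannot conflict with the CNI that minikube manages; here nothing matched, hence the "nothing to disable" line. A hedged reconstruction of the command as it runs on the node (the mv is quoted here for safety; the log shows the bare {} form):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf "%p, " -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;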
	I0130 21:39:00.801788  664102 start.go:475] detecting cgroup driver to use...
	I0130 21:39:00.801838  664102 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 21:39:00.815329  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 21:39:00.826548  664102 docker.go:217] disabling cri-docker service (if available) ...
	I0130 21:39:00.826603  664102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 21:39:00.838686  664102 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 21:39:00.850852  664102 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 21:39:00.979400  664102 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 21:39:01.109753  664102 docker.go:233] disabling docker service ...
	I0130 21:39:01.109838  664102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 21:39:01.123246  664102 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 21:39:01.135439  664102 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 21:39:01.258844  664102 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 21:39:01.375098  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 21:39:01.386973  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 21:39:01.402941  664102 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
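The printf %!s(MISSING) above is again a literal "%s" format verb; the step writes a one-line /etc/crictl.yaml that points crictl at the CRI-O socket, and the tee echo on the line above confirms the contents. An equivalent, slightly simplified sketch of the same step:

    # write crictl's default config so it talks to CRI-O
    sudo mkdir -p /etc
    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml

crictl reads this file by default, which is why the later "sudo /usr/bin/crictl version" call in this log reaches CRI-O without a --runtime-endpoint flag.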
	I0130 21:39:01.403315  664102 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 21:39:01.403365  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:39:01.412040  664102 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 21:39:01.412097  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:39:01.420722  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:39:01.429544  664102 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 21:39:01.438027  664102 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 21:39:01.446979  664102 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 21:39:01.454609  664102 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0130 21:39:01.454666  664102 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 21:39:01.462223  664102 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 21:39:01.585510  664102 ssh_runner.go:195] Run: sudo systemctl restart crio
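The three sed edits at 21:39:01 pin the pause image and switch CRI-O to the cgroupfs cgroup manager with conmon in the "pod" cgroup; the daemon-reload and crio restart above then pick the drop-in up. A quick way to confirm the result on the node, using the drop-in path taken from the log (cgroup_manager and conmon_cgroup are also visible in the crio config dump further down):

    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the sed commands above:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"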
	I0130 21:39:02.080471  664102 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 21:39:02.080555  664102 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 21:39:02.085534  664102 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0130 21:39:02.085552  664102 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0130 21:39:02.085560  664102 command_runner.go:130] > Device: 16h/22d	Inode: 1170        Links: 1
	I0130 21:39:02.085567  664102 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 21:39:02.085575  664102 command_runner.go:130] > Access: 2024-01-30 21:39:02.031240198 +0000
	I0130 21:39:02.085581  664102 command_runner.go:130] > Modify: 2024-01-30 21:39:02.007238010 +0000
	I0130 21:39:02.085594  664102 command_runner.go:130] > Change: 2024-01-30 21:39:02.007238010 +0000
	I0130 21:39:02.085600  664102 command_runner.go:130] >  Birth: -
	I0130 21:39:02.085619  664102 start.go:543] Will wait 60s for crictl version
	I0130 21:39:02.085665  664102 ssh_runner.go:195] Run: which crictl
	I0130 21:39:02.089214  664102 command_runner.go:130] > /usr/bin/crictl
	I0130 21:39:02.089292  664102 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 21:39:02.131005  664102 command_runner.go:130] > Version:  0.1.0
	I0130 21:39:02.131083  664102 command_runner.go:130] > RuntimeName:  cri-o
	I0130 21:39:02.131302  664102 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0130 21:39:02.131386  664102 command_runner.go:130] > RuntimeApiVersion:  v1
	I0130 21:39:02.132863  664102 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 21:39:02.132931  664102 ssh_runner.go:195] Run: crio --version
	I0130 21:39:02.186587  664102 command_runner.go:130] > crio version 1.24.1
	I0130 21:39:02.186612  664102 command_runner.go:130] > Version:          1.24.1
	I0130 21:39:02.186624  664102 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 21:39:02.186631  664102 command_runner.go:130] > GitTreeState:     dirty
	I0130 21:39:02.186640  664102 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 21:39:02.186647  664102 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 21:39:02.186654  664102 command_runner.go:130] > Compiler:         gc
	I0130 21:39:02.186661  664102 command_runner.go:130] > Platform:         linux/amd64
	I0130 21:39:02.186669  664102 command_runner.go:130] > Linkmode:         dynamic
	I0130 21:39:02.186682  664102 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 21:39:02.186695  664102 command_runner.go:130] > SeccompEnabled:   true
	I0130 21:39:02.186702  664102 command_runner.go:130] > AppArmorEnabled:  false
	I0130 21:39:02.186789  664102 ssh_runner.go:195] Run: crio --version
	I0130 21:39:02.228051  664102 command_runner.go:130] > crio version 1.24.1
	I0130 21:39:02.228085  664102 command_runner.go:130] > Version:          1.24.1
	I0130 21:39:02.228096  664102 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0130 21:39:02.228103  664102 command_runner.go:130] > GitTreeState:     dirty
	I0130 21:39:02.228110  664102 command_runner.go:130] > BuildDate:        2023-12-28T22:46:29Z
	I0130 21:39:02.228117  664102 command_runner.go:130] > GoVersion:        go1.19.9
	I0130 21:39:02.228123  664102 command_runner.go:130] > Compiler:         gc
	I0130 21:39:02.228129  664102 command_runner.go:130] > Platform:         linux/amd64
	I0130 21:39:02.228138  664102 command_runner.go:130] > Linkmode:         dynamic
	I0130 21:39:02.228150  664102 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0130 21:39:02.228162  664102 command_runner.go:130] > SeccompEnabled:   true
	I0130 21:39:02.228173  664102 command_runner.go:130] > AppArmorEnabled:  false
	I0130 21:39:02.231289  664102 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 21:39:02.232706  664102 out.go:177]   - env NO_PROXY=192.168.39.174
	I0130 21:39:02.234190  664102 out.go:177]   - env NO_PROXY=192.168.39.174,192.168.39.69
	I0130 21:39:02.235515  664102 main.go:141] libmachine: (multinode-721181-m03) Calling .GetIP
	I0130 21:39:02.238604  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:02.238985  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:ad:78", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:27:26 +0000 UTC Type:0 Mac:52:54:00:da:ad:78 Iaid: IPaddr:192.168.39.218 Prefix:24 Hostname:multinode-721181-m03 Clientid:01:52:54:00:da:ad:78}
	I0130 21:39:02.239019  664102 main.go:141] libmachine: (multinode-721181-m03) DBG | domain multinode-721181-m03 has defined IP address 192.168.39.218 and MAC address 52:54:00:da:ad:78 in network mk-multinode-721181
	I0130 21:39:02.239157  664102 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 21:39:02.243157  664102 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0130 21:39:02.243329  664102 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181 for IP: 192.168.39.218
	I0130 21:39:02.243357  664102 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:39:02.243513  664102 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 21:39:02.243552  664102 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 21:39:02.243565  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0130 21:39:02.243579  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0130 21:39:02.243591  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0130 21:39:02.243603  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0130 21:39:02.243650  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 21:39:02.243699  664102 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 21:39:02.243722  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 21:39:02.243757  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 21:39:02.243794  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 21:39:02.243828  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 21:39:02.243873  664102 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 21:39:02.243900  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem -> /usr/share/ca-certificates/647718.pem
	I0130 21:39:02.243914  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> /usr/share/ca-certificates/6477182.pem
	I0130 21:39:02.243926  664102 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:39:02.244308  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 21:39:02.269439  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 21:39:02.292948  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 21:39:02.315056  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 21:39:02.336908  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 21:39:02.359118  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 21:39:02.383627  664102 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 21:39:02.406184  664102 ssh_runner.go:195] Run: openssl version
	I0130 21:39:02.411654  664102 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0130 21:39:02.411723  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 21:39:02.420705  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 21:39:02.425018  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:39:02.425044  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 21:39:02.425073  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 21:39:02.430352  664102 command_runner.go:130] > 51391683
	I0130 21:39:02.430411  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 21:39:02.438189  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 21:39:02.447620  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 21:39:02.452110  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:39:02.452138  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 21:39:02.452169  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 21:39:02.457322  664102 command_runner.go:130] > 3ec20f2e
	I0130 21:39:02.457670  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 21:39:02.465956  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 21:39:02.475227  664102 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:39:02.479546  664102 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:39:02.479577  664102 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:39:02.479614  664102 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 21:39:02.484511  664102 command_runner.go:130] > b5213941
	I0130 21:39:02.484871  664102 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
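The three "openssl x509 -hash" calls above print each certificate's subject hash (51391683, 3ec20f2e, b5213941); OpenSSL looks up trusted CAs in /etc/ssl/certs via a <hash>.0 symlink, which is exactly what the ln -fs lines create. A compact sketch of that pattern, using the minikubeCA example from this log:

    # derive the subject hash and create the lookup symlink OpenSSL expects
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"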
	I0130 21:39:02.493092  664102 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 21:39:02.496884  664102 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 21:39:02.496992  664102 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 21:39:02.497071  664102 ssh_runner.go:195] Run: crio config
	I0130 21:39:02.545868  664102 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0130 21:39:02.545899  664102 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0130 21:39:02.545906  664102 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0130 21:39:02.545910  664102 command_runner.go:130] > #
	I0130 21:39:02.545917  664102 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0130 21:39:02.545923  664102 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0130 21:39:02.545929  664102 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0130 21:39:02.545942  664102 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0130 21:39:02.545946  664102 command_runner.go:130] > # reload'.
	I0130 21:39:02.545954  664102 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0130 21:39:02.545961  664102 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0130 21:39:02.545971  664102 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0130 21:39:02.545977  664102 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0130 21:39:02.545984  664102 command_runner.go:130] > [crio]
	I0130 21:39:02.545996  664102 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0130 21:39:02.546007  664102 command_runner.go:130] > # containers images, in this directory.
	I0130 21:39:02.546015  664102 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0130 21:39:02.546032  664102 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0130 21:39:02.546043  664102 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0130 21:39:02.546055  664102 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0130 21:39:02.546069  664102 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0130 21:39:02.546080  664102 command_runner.go:130] > storage_driver = "overlay"
	I0130 21:39:02.546089  664102 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0130 21:39:02.546095  664102 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0130 21:39:02.546100  664102 command_runner.go:130] > storage_option = [
	I0130 21:39:02.546112  664102 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0130 21:39:02.546121  664102 command_runner.go:130] > ]
	I0130 21:39:02.546131  664102 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0130 21:39:02.546146  664102 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0130 21:39:02.546157  664102 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0130 21:39:02.546170  664102 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0130 21:39:02.546181  664102 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0130 21:39:02.546191  664102 command_runner.go:130] > # always happen on a node reboot
	I0130 21:39:02.546202  664102 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0130 21:39:02.546216  664102 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0130 21:39:02.546228  664102 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0130 21:39:02.546245  664102 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0130 21:39:02.546257  664102 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0130 21:39:02.546270  664102 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0130 21:39:02.546289  664102 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0130 21:39:02.546300  664102 command_runner.go:130] > # internal_wipe = true
	I0130 21:39:02.546313  664102 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0130 21:39:02.546327  664102 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0130 21:39:02.546339  664102 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0130 21:39:02.546352  664102 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0130 21:39:02.546359  664102 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0130 21:39:02.546365  664102 command_runner.go:130] > [crio.api]
	I0130 21:39:02.546371  664102 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0130 21:39:02.546378  664102 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0130 21:39:02.546384  664102 command_runner.go:130] > # IP address on which the stream server will listen.
	I0130 21:39:02.546391  664102 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0130 21:39:02.546399  664102 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0130 21:39:02.546411  664102 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0130 21:39:02.546421  664102 command_runner.go:130] > # stream_port = "0"
	I0130 21:39:02.546430  664102 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0130 21:39:02.546441  664102 command_runner.go:130] > # stream_enable_tls = false
	I0130 21:39:02.546454  664102 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0130 21:39:02.546465  664102 command_runner.go:130] > # stream_idle_timeout = ""
	I0130 21:39:02.546474  664102 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0130 21:39:02.546486  664102 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0130 21:39:02.546494  664102 command_runner.go:130] > # minutes.
	I0130 21:39:02.546507  664102 command_runner.go:130] > # stream_tls_cert = ""
	I0130 21:39:02.546521  664102 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0130 21:39:02.546536  664102 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0130 21:39:02.546546  664102 command_runner.go:130] > # stream_tls_key = ""
	I0130 21:39:02.546555  664102 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0130 21:39:02.546568  664102 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0130 21:39:02.546580  664102 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0130 21:39:02.546590  664102 command_runner.go:130] > # stream_tls_ca = ""
	I0130 21:39:02.546606  664102 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 21:39:02.546618  664102 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0130 21:39:02.546632  664102 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0130 21:39:02.546643  664102 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0130 21:39:02.546661  664102 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0130 21:39:02.546678  664102 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0130 21:39:02.546685  664102 command_runner.go:130] > [crio.runtime]
	I0130 21:39:02.546694  664102 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0130 21:39:02.546706  664102 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0130 21:39:02.546716  664102 command_runner.go:130] > # "nofile=1024:2048"
	I0130 21:39:02.546729  664102 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0130 21:39:02.546738  664102 command_runner.go:130] > # default_ulimits = [
	I0130 21:39:02.546742  664102 command_runner.go:130] > # ]
	I0130 21:39:02.546754  664102 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0130 21:39:02.546763  664102 command_runner.go:130] > # no_pivot = false
	I0130 21:39:02.546773  664102 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0130 21:39:02.546786  664102 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0130 21:39:02.546797  664102 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0130 21:39:02.546806  664102 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0130 21:39:02.546817  664102 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0130 21:39:02.546831  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 21:39:02.546841  664102 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0130 21:39:02.546849  664102 command_runner.go:130] > # Cgroup setting for conmon
	I0130 21:39:02.546863  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0130 21:39:02.546874  664102 command_runner.go:130] > conmon_cgroup = "pod"
	I0130 21:39:02.546887  664102 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0130 21:39:02.546905  664102 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0130 21:39:02.546918  664102 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0130 21:39:02.546930  664102 command_runner.go:130] > conmon_env = [
	I0130 21:39:02.546942  664102 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0130 21:39:02.546951  664102 command_runner.go:130] > ]
	I0130 21:39:02.546961  664102 command_runner.go:130] > # Additional environment variables to set for all the
	I0130 21:39:02.546972  664102 command_runner.go:130] > # containers. These are overridden if set in the
	I0130 21:39:02.546986  664102 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0130 21:39:02.546996  664102 command_runner.go:130] > # default_env = [
	I0130 21:39:02.547003  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547016  664102 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0130 21:39:02.547026  664102 command_runner.go:130] > # selinux = false
	I0130 21:39:02.547037  664102 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0130 21:39:02.547051  664102 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0130 21:39:02.547063  664102 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0130 21:39:02.547073  664102 command_runner.go:130] > # seccomp_profile = ""
	I0130 21:39:02.547086  664102 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0130 21:39:02.547099  664102 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0130 21:39:02.547113  664102 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0130 21:39:02.547124  664102 command_runner.go:130] > # which might increase security.
	I0130 21:39:02.547133  664102 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0130 21:39:02.547146  664102 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0130 21:39:02.547157  664102 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0130 21:39:02.547169  664102 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0130 21:39:02.547183  664102 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0130 21:39:02.547196  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:39:02.547207  664102 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0130 21:39:02.547220  664102 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0130 21:39:02.547231  664102 command_runner.go:130] > # the cgroup blockio controller.
	I0130 21:39:02.547241  664102 command_runner.go:130] > # blockio_config_file = ""
	I0130 21:39:02.547250  664102 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0130 21:39:02.547258  664102 command_runner.go:130] > # irqbalance daemon.
	I0130 21:39:02.547270  664102 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0130 21:39:02.547284  664102 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0130 21:39:02.547295  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:39:02.547306  664102 command_runner.go:130] > # rdt_config_file = ""
	I0130 21:39:02.547316  664102 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0130 21:39:02.547326  664102 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0130 21:39:02.547340  664102 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0130 21:39:02.547351  664102 command_runner.go:130] > # separate_pull_cgroup = ""
	I0130 21:39:02.547365  664102 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0130 21:39:02.547379  664102 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0130 21:39:02.547389  664102 command_runner.go:130] > # will be added.
	I0130 21:39:02.547398  664102 command_runner.go:130] > # default_capabilities = [
	I0130 21:39:02.547406  664102 command_runner.go:130] > # 	"CHOWN",
	I0130 21:39:02.547416  664102 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0130 21:39:02.547423  664102 command_runner.go:130] > # 	"FSETID",
	I0130 21:39:02.547434  664102 command_runner.go:130] > # 	"FOWNER",
	I0130 21:39:02.547440  664102 command_runner.go:130] > # 	"SETGID",
	I0130 21:39:02.547450  664102 command_runner.go:130] > # 	"SETUID",
	I0130 21:39:02.547459  664102 command_runner.go:130] > # 	"SETPCAP",
	I0130 21:39:02.547470  664102 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0130 21:39:02.547478  664102 command_runner.go:130] > # 	"KILL",
	I0130 21:39:02.547485  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547499  664102 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0130 21:39:02.547511  664102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 21:39:02.547522  664102 command_runner.go:130] > # default_sysctls = [
	I0130 21:39:02.547529  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547538  664102 command_runner.go:130] > # List of devices on the host that a
	I0130 21:39:02.547552  664102 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0130 21:39:02.547562  664102 command_runner.go:130] > # allowed_devices = [
	I0130 21:39:02.547569  664102 command_runner.go:130] > # 	"/dev/fuse",
	I0130 21:39:02.547576  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547582  664102 command_runner.go:130] > # List of additional devices. specified as
	I0130 21:39:02.547595  664102 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0130 21:39:02.547607  664102 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0130 21:39:02.547632  664102 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0130 21:39:02.547643  664102 command_runner.go:130] > # additional_devices = [
	I0130 21:39:02.547650  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547660  664102 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0130 21:39:02.547669  664102 command_runner.go:130] > # cdi_spec_dirs = [
	I0130 21:39:02.547677  664102 command_runner.go:130] > # 	"/etc/cdi",
	I0130 21:39:02.547685  664102 command_runner.go:130] > # 	"/var/run/cdi",
	I0130 21:39:02.547690  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547706  664102 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0130 21:39:02.547721  664102 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0130 21:39:02.547731  664102 command_runner.go:130] > # Defaults to false.
	I0130 21:39:02.547741  664102 command_runner.go:130] > # device_ownership_from_security_context = false
	I0130 21:39:02.547755  664102 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0130 21:39:02.547765  664102 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0130 21:39:02.547769  664102 command_runner.go:130] > # hooks_dir = [
	I0130 21:39:02.547778  664102 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0130 21:39:02.547787  664102 command_runner.go:130] > # ]
	I0130 21:39:02.547798  664102 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0130 21:39:02.547812  664102 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0130 21:39:02.547824  664102 command_runner.go:130] > # its default mounts from the following two files:
	I0130 21:39:02.547833  664102 command_runner.go:130] > #
	I0130 21:39:02.547846  664102 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0130 21:39:02.547858  664102 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0130 21:39:02.547868  664102 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0130 21:39:02.547876  664102 command_runner.go:130] > #
	I0130 21:39:02.547897  664102 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0130 21:39:02.547912  664102 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0130 21:39:02.547928  664102 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0130 21:39:02.547939  664102 command_runner.go:130] > #      only add mounts it finds in this file.
	I0130 21:39:02.547948  664102 command_runner.go:130] > #
	I0130 21:39:02.547954  664102 command_runner.go:130] > # default_mounts_file = ""
	I0130 21:39:02.547962  664102 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0130 21:39:02.547974  664102 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0130 21:39:02.547984  664102 command_runner.go:130] > pids_limit = 1024
	I0130 21:39:02.547996  664102 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0130 21:39:02.548009  664102 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0130 21:39:02.548020  664102 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0130 21:39:02.548037  664102 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0130 21:39:02.548046  664102 command_runner.go:130] > # log_size_max = -1
	I0130 21:39:02.548056  664102 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0130 21:39:02.548081  664102 command_runner.go:130] > # log_to_journald = false
	I0130 21:39:02.548095  664102 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0130 21:39:02.548107  664102 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0130 21:39:02.548118  664102 command_runner.go:130] > # Path to directory for container attach sockets.
	I0130 21:39:02.548131  664102 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0130 21:39:02.548141  664102 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0130 21:39:02.548148  664102 command_runner.go:130] > # bind_mount_prefix = ""
	I0130 21:39:02.548158  664102 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0130 21:39:02.548168  664102 command_runner.go:130] > # read_only = false
	I0130 21:39:02.548180  664102 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0130 21:39:02.548194  664102 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0130 21:39:02.548204  664102 command_runner.go:130] > # live configuration reload.
	I0130 21:39:02.548214  664102 command_runner.go:130] > # log_level = "info"
	I0130 21:39:02.548227  664102 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0130 21:39:02.548241  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:39:02.548252  664102 command_runner.go:130] > # log_filter = ""
	I0130 21:39:02.548266  664102 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0130 21:39:02.548280  664102 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0130 21:39:02.548290  664102 command_runner.go:130] > # separated by comma.
	I0130 21:39:02.548299  664102 command_runner.go:130] > # uid_mappings = ""
	I0130 21:39:02.548312  664102 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0130 21:39:02.548321  664102 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0130 21:39:02.548331  664102 command_runner.go:130] > # separated by comma.
	I0130 21:39:02.548341  664102 command_runner.go:130] > # gid_mappings = ""
	I0130 21:39:02.548353  664102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0130 21:39:02.548367  664102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 21:39:02.548380  664102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 21:39:02.548391  664102 command_runner.go:130] > # minimum_mappable_uid = -1
	I0130 21:39:02.548403  664102 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0130 21:39:02.548413  664102 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0130 21:39:02.548423  664102 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0130 21:39:02.548432  664102 command_runner.go:130] > # minimum_mappable_gid = -1
	I0130 21:39:02.548446  664102 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0130 21:39:02.548460  664102 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0130 21:39:02.548473  664102 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0130 21:39:02.548484  664102 command_runner.go:130] > # ctr_stop_timeout = 30
	I0130 21:39:02.548497  664102 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0130 21:39:02.548509  664102 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0130 21:39:02.548521  664102 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0130 21:39:02.548532  664102 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0130 21:39:02.548545  664102 command_runner.go:130] > drop_infra_ctr = false
	I0130 21:39:02.548556  664102 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0130 21:39:02.548569  664102 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0130 21:39:02.548584  664102 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0130 21:39:02.548595  664102 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0130 21:39:02.548607  664102 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0130 21:39:02.548616  664102 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0130 21:39:02.548621  664102 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0130 21:39:02.548636  664102 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0130 21:39:02.548648  664102 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0130 21:39:02.548660  664102 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0130 21:39:02.548674  664102 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0130 21:39:02.548687  664102 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0130 21:39:02.548698  664102 command_runner.go:130] > # default_runtime = "runc"
	I0130 21:39:02.548708  664102 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0130 21:39:02.548719  664102 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0130 21:39:02.548737  664102 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0130 21:39:02.548749  664102 command_runner.go:130] > # creation as a file is not desired either.
	I0130 21:39:02.548767  664102 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0130 21:39:02.548780  664102 command_runner.go:130] > # the hostname is being managed dynamically.
	I0130 21:39:02.548786  664102 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0130 21:39:02.548791  664102 command_runner.go:130] > # ]
	I0130 21:39:02.548801  664102 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0130 21:39:02.548825  664102 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0130 21:39:02.548837  664102 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0130 21:39:02.548849  664102 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0130 21:39:02.548853  664102 command_runner.go:130] > #
	I0130 21:39:02.548861  664102 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0130 21:39:02.548870  664102 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0130 21:39:02.548878  664102 command_runner.go:130] > #  runtime_type = "oci"
	I0130 21:39:02.548894  664102 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0130 21:39:02.548905  664102 command_runner.go:130] > #  privileged_without_host_devices = false
	I0130 21:39:02.548916  664102 command_runner.go:130] > #  allowed_annotations = []
	I0130 21:39:02.548923  664102 command_runner.go:130] > # Where:
	I0130 21:39:02.548932  664102 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0130 21:39:02.548943  664102 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0130 21:39:02.548959  664102 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0130 21:39:02.548974  664102 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0130 21:39:02.548983  664102 command_runner.go:130] > #   in $PATH.
	I0130 21:39:02.548995  664102 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0130 21:39:02.549007  664102 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0130 21:39:02.549020  664102 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0130 21:39:02.549027  664102 command_runner.go:130] > #   state.
	I0130 21:39:02.549035  664102 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0130 21:39:02.549049  664102 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0130 21:39:02.549063  664102 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0130 21:39:02.549077  664102 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0130 21:39:02.549091  664102 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0130 21:39:02.549105  664102 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0130 21:39:02.549117  664102 command_runner.go:130] > #   The currently recognized values are:
	I0130 21:39:02.549128  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0130 21:39:02.549141  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0130 21:39:02.549154  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0130 21:39:02.549170  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0130 21:39:02.549186  664102 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0130 21:39:02.549201  664102 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0130 21:39:02.549213  664102 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0130 21:39:02.549223  664102 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0130 21:39:02.549235  664102 command_runner.go:130] > #   should be moved to the container's cgroup
	I0130 21:39:02.549246  664102 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0130 21:39:02.549258  664102 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0130 21:39:02.549268  664102 command_runner.go:130] > runtime_type = "oci"
	I0130 21:39:02.549278  664102 command_runner.go:130] > runtime_root = "/run/runc"
	I0130 21:39:02.549288  664102 command_runner.go:130] > runtime_config_path = ""
	I0130 21:39:02.549299  664102 command_runner.go:130] > monitor_path = ""
	I0130 21:39:02.549309  664102 command_runner.go:130] > monitor_cgroup = ""
	I0130 21:39:02.549316  664102 command_runner.go:130] > monitor_exec_cgroup = ""
	I0130 21:39:02.549325  664102 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0130 21:39:02.549335  664102 command_runner.go:130] > # running containers
	I0130 21:39:02.549346  664102 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0130 21:39:02.549361  664102 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0130 21:39:02.549396  664102 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0130 21:39:02.549410  664102 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0130 21:39:02.549418  664102 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0130 21:39:02.549429  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0130 21:39:02.549440  664102 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0130 21:39:02.549451  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0130 21:39:02.549462  664102 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0130 21:39:02.549483  664102 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
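Note: in the config dump above only runc is defined as an active runtime handler; the crun and Kata handler tables are left commented out. A quick way to confirm which handlers the provisioned VM actually carries (a sketch, assuming the stock CRI-O config location under /etc/crio/ that minikube provisions):

    # list the runtime handler tables present in the CRI-O configuration
    sudo grep -Rn '^\[crio.runtime.runtimes' /etc/crio/
    # expected on this VM: only [crio.runtime.runtimes.runc]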
	I0130 21:39:02.549498  664102 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0130 21:39:02.549511  664102 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0130 21:39:02.549525  664102 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0130 21:39:02.549541  664102 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0130 21:39:02.549557  664102 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0130 21:39:02.549571  664102 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0130 21:39:02.549590  664102 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0130 21:39:02.549607  664102 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0130 21:39:02.549621  664102 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0130 21:39:02.549634  664102 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0130 21:39:02.549641  664102 command_runner.go:130] > # Example:
	I0130 21:39:02.549649  664102 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0130 21:39:02.549661  664102 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0130 21:39:02.549673  664102 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0130 21:39:02.549685  664102 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0130 21:39:02.549695  664102 command_runner.go:130] > # cpuset = 0
	I0130 21:39:02.549705  664102 command_runner.go:130] > # cpushares = "0-1"
	I0130 21:39:02.549715  664102 command_runner.go:130] > # Where:
	I0130 21:39:02.549723  664102 command_runner.go:130] > # The workload name is workload-type.
	I0130 21:39:02.549736  664102 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0130 21:39:02.549749  664102 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0130 21:39:02.549762  664102 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0130 21:39:02.549778  664102 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0130 21:39:02.549791  664102 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0130 21:39:02.549800  664102 command_runner.go:130] > # 
	I0130 21:39:02.549810  664102 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0130 21:39:02.549814  664102 command_runner.go:130] > #
	I0130 21:39:02.549823  664102 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0130 21:39:02.549838  664102 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0130 21:39:02.549860  664102 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0130 21:39:02.549874  664102 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0130 21:39:02.549887  664102 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0130 21:39:02.549900  664102 command_runner.go:130] > [crio.image]
	I0130 21:39:02.549909  664102 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0130 21:39:02.549916  664102 command_runner.go:130] > # default_transport = "docker://"
	I0130 21:39:02.549936  664102 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0130 21:39:02.549950  664102 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0130 21:39:02.549961  664102 command_runner.go:130] > # global_auth_file = ""
	I0130 21:39:02.549972  664102 command_runner.go:130] > # The image used to instantiate infra containers.
	I0130 21:39:02.549985  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:39:02.549996  664102 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0130 21:39:02.550010  664102 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0130 21:39:02.550020  664102 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0130 21:39:02.550028  664102 command_runner.go:130] > # This option supports live configuration reload.
	I0130 21:39:02.550039  664102 command_runner.go:130] > # pause_image_auth_file = ""
	I0130 21:39:02.550050  664102 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0130 21:39:02.550064  664102 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0130 21:39:02.550077  664102 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0130 21:39:02.550090  664102 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0130 21:39:02.550101  664102 command_runner.go:130] > # pause_command = "/pause"
	I0130 21:39:02.550111  664102 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0130 21:39:02.550123  664102 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0130 21:39:02.550138  664102 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0130 21:39:02.550151  664102 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0130 21:39:02.550164  664102 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0130 21:39:02.550174  664102 command_runner.go:130] > # signature_policy = ""
	I0130 21:39:02.550188  664102 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0130 21:39:02.550198  664102 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0130 21:39:02.550205  664102 command_runner.go:130] > # changing them here.
	I0130 21:39:02.550213  664102 command_runner.go:130] > # insecure_registries = [
	I0130 21:39:02.550223  664102 command_runner.go:130] > # ]
	I0130 21:39:02.550235  664102 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0130 21:39:02.550247  664102 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0130 21:39:02.550258  664102 command_runner.go:130] > # image_volumes = "mkdir"
	I0130 21:39:02.550270  664102 command_runner.go:130] > # Temporary directory to use for storing big files
	I0130 21:39:02.550282  664102 command_runner.go:130] > # big_files_temporary_dir = ""
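pause_image is the only uncommented setting in the [crio.image] table above (registry.k8s.io/pause:3.9); registries themselves still come from /etc/containers/registries.conf. A hedged way to double-check the infra image on the node (crictl ships on the minikube ISO):

    # configured infra (pause) image, and whether it has been pulled yet
    sudo grep -R '^pause_image' /etc/crio/
    sudo crictl images | grep pause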
	I0130 21:39:02.550293  664102 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0130 21:39:02.550299  664102 command_runner.go:130] > # CNI plugins.
	I0130 21:39:02.550306  664102 command_runner.go:130] > [crio.network]
	I0130 21:39:02.550320  664102 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0130 21:39:02.550332  664102 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0130 21:39:02.550343  664102 command_runner.go:130] > # cni_default_network = ""
	I0130 21:39:02.550356  664102 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0130 21:39:02.550367  664102 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0130 21:39:02.550378  664102 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0130 21:39:02.550384  664102 command_runner.go:130] > # plugin_dirs = [
	I0130 21:39:02.550390  664102 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0130 21:39:02.550399  664102 command_runner.go:130] > # ]
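With cni_default_network left empty, CRI-O simply uses the first network definition it finds in network_dir; the kindnet manifest applied later in this log is what ends up providing that definition. A quick check against the default paths shown above:

    # the first file found here is the network CRI-O will select
    ls -l /etc/cni/net.d/
    ls /opt/cni/bin/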
	I0130 21:39:02.550412  664102 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0130 21:39:02.550423  664102 command_runner.go:130] > [crio.metrics]
	I0130 21:39:02.550435  664102 command_runner.go:130] > # Globally enable or disable metrics support.
	I0130 21:39:02.550445  664102 command_runner.go:130] > enable_metrics = true
	I0130 21:39:02.550456  664102 command_runner.go:130] > # Specify enabled metrics collectors.
	I0130 21:39:02.550467  664102 command_runner.go:130] > # Per default all metrics are enabled.
	I0130 21:39:02.550478  664102 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0130 21:39:02.550489  664102 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0130 21:39:02.550501  664102 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0130 21:39:02.550511  664102 command_runner.go:130] > # metrics_collectors = [
	I0130 21:39:02.550520  664102 command_runner.go:130] > # 	"operations",
	I0130 21:39:02.550527  664102 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0130 21:39:02.550539  664102 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0130 21:39:02.550554  664102 command_runner.go:130] > # 	"operations_errors",
	I0130 21:39:02.550562  664102 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0130 21:39:02.550569  664102 command_runner.go:130] > # 	"image_pulls_by_name",
	I0130 21:39:02.550577  664102 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0130 21:39:02.550584  664102 command_runner.go:130] > # 	"image_pulls_failures",
	I0130 21:39:02.550593  664102 command_runner.go:130] > # 	"image_pulls_successes",
	I0130 21:39:02.550606  664102 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0130 21:39:02.550612  664102 command_runner.go:130] > # 	"image_layer_reuse",
	I0130 21:39:02.550616  664102 command_runner.go:130] > # 	"containers_oom_total",
	I0130 21:39:02.550620  664102 command_runner.go:130] > # 	"containers_oom",
	I0130 21:39:02.550625  664102 command_runner.go:130] > # 	"processes_defunct",
	I0130 21:39:02.550629  664102 command_runner.go:130] > # 	"operations_total",
	I0130 21:39:02.550636  664102 command_runner.go:130] > # 	"operations_latency_seconds",
	I0130 21:39:02.550640  664102 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0130 21:39:02.550645  664102 command_runner.go:130] > # 	"operations_errors_total",
	I0130 21:39:02.550649  664102 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0130 21:39:02.550655  664102 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0130 21:39:02.550660  664102 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0130 21:39:02.550664  664102 command_runner.go:130] > # 	"image_pulls_success_total",
	I0130 21:39:02.550668  664102 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0130 21:39:02.550674  664102 command_runner.go:130] > # 	"containers_oom_count_total",
	I0130 21:39:02.550678  664102 command_runner.go:130] > # ]
	I0130 21:39:02.550684  664102 command_runner.go:130] > # The port on which the metrics server will listen.
	I0130 21:39:02.550688  664102 command_runner.go:130] > # metrics_port = 9090
	I0130 21:39:02.550695  664102 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0130 21:39:02.550702  664102 command_runner.go:130] > # metrics_socket = ""
	I0130 21:39:02.550711  664102 command_runner.go:130] > # The certificate for the secure metrics server.
	I0130 21:39:02.550721  664102 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0130 21:39:02.550731  664102 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0130 21:39:02.550741  664102 command_runner.go:130] > # certificate on any modification event.
	I0130 21:39:02.550747  664102 command_runner.go:130] > # metrics_cert = ""
	I0130 21:39:02.550756  664102 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0130 21:39:02.550764  664102 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0130 21:39:02.550773  664102 command_runner.go:130] > # metrics_key = ""
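enable_metrics = true is set while metrics_port stays at its commented default, so the Prometheus endpoint should answer on port 9090 inside the VM. A minimal sanity check (a sketch; the port is assumed from the default shown above):

    curl -s http://127.0.0.1:9090/metrics | head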
	I0130 21:39:02.550782  664102 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0130 21:39:02.550792  664102 command_runner.go:130] > [crio.tracing]
	I0130 21:39:02.550802  664102 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0130 21:39:02.550810  664102 command_runner.go:130] > # enable_tracing = false
	I0130 21:39:02.550820  664102 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0130 21:39:02.550832  664102 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0130 21:39:02.550841  664102 command_runner.go:130] > # Number of samples to collect per million spans.
	I0130 21:39:02.550852  664102 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0130 21:39:02.550864  664102 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0130 21:39:02.550869  664102 command_runner.go:130] > [crio.stats]
	I0130 21:39:02.550875  664102 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0130 21:39:02.550883  664102 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0130 21:39:02.550889  664102 command_runner.go:130] > # stats_collection_period = 0
	I0130 21:39:02.550921  664102 command_runner.go:130] ! time="2024-01-30 21:39:02.534752855Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0130 21:39:02.550935  664102 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0130 21:39:02.550993  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:39:02.551002  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:39:02.551010  664102 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 21:39:02.551029  664102 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.218 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-721181 NodeName:multinode-721181-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.174"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.218 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 21:39:02.551142  664102 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.218
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-721181-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.218
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.174"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 21:39:02.551193  664102 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-721181-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.218
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
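The block above is the kubeadm, kubelet and kube-proxy configuration minikube renders for the joining worker, plus the systemd ExecStart override it installs as a drop-in. Two details worth noting: the evictionHard thresholds are intended to be "0%" (the %!"(MISSING) suffixes are formatting artifacts of minikube's logging, not values written to the node), which together with imageGCHighThresholdPercent: 100 disables disk-pressure eviction, and failSwapOn is false. To inspect what actually lands on the node once the join below finishes, something like:

    # kubelet configuration written by kubeadm join (see the kubelet-start lines below)
    sudo cat /var/lib/kubelet/config.yaml
    # kubelet unit plus the 10-kubeadm.conf drop-in carrying the ExecStart flags above
    systemctl cat kubelet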
	I0130 21:39:02.551241  664102 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 21:39:02.560725  664102 command_runner.go:130] > kubeadm
	I0130 21:39:02.560741  664102 command_runner.go:130] > kubectl
	I0130 21:39:02.560744  664102 command_runner.go:130] > kubelet
	I0130 21:39:02.561331  664102 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 21:39:02.561376  664102 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0130 21:39:02.569936  664102 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0130 21:39:02.585720  664102 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 21:39:02.601955  664102 ssh_runner.go:195] Run: grep 192.168.39.174	control-plane.minikube.internal$ /etc/hosts
	I0130 21:39:02.606061  664102 command_runner.go:130] > 192.168.39.174	control-plane.minikube.internal
	I0130 21:39:02.606124  664102 host.go:66] Checking if "multinode-721181" exists ...
	I0130 21:39:02.606476  664102 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:39:02.606524  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:39:02.606571  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:39:02.621306  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41947
	I0130 21:39:02.621770  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:39:02.622237  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:39:02.622260  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:39:02.622618  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:39:02.622812  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:39:02.622987  664102 start.go:304] JoinCluster: &{Name:multinode-721181 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.4 ClusterName:multinode-721181 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.174 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.69 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingr
ess-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:39:02.623093  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0130 21:39:02.623110  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:39:02.625915  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:39:02.626375  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:39:02.626401  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:39:02.626551  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:39:02.626774  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:39:02.626944  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:39:02.627090  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:39:02.801878  664102 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xgt9rk.0cqjv5zx0ixedyvn --discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 21:39:02.802134  664102 start.go:317] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0130 21:39:02.802189  664102 host.go:66] Checking if "multinode-721181" exists ...
	I0130 21:39:02.802636  664102 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:39:02.802698  664102 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:39:02.817231  664102 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32817
	I0130 21:39:02.817699  664102 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:39:02.818121  664102 main.go:141] libmachine: Using API Version  1
	I0130 21:39:02.818147  664102 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:39:02.818504  664102 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:39:02.818721  664102 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:39:02.818922  664102 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-721181-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0130 21:39:02.818946  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:39:02.821853  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:39:02.822293  664102 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:34:55 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:39:02.822322  664102 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:39:02.822483  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:39:02.822653  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:39:02.822799  664102 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:39:02.822927  664102 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:39:03.035506  664102 command_runner.go:130] > node/multinode-721181-m03 cordoned
	I0130 21:39:06.076655  664102 command_runner.go:130] > pod "busybox-5b5d89c9d6-rgkc4" has DeletionTimestamp older than 1 seconds, skipping
	I0130 21:39:06.076689  664102 command_runner.go:130] > node/multinode-721181-m03 drained
	I0130 21:39:06.078351  664102 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0130 21:39:06.078375  664102 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-qxwqk, kube-system/kube-proxy-lwg96
	I0130 21:39:06.078398  664102 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl drain multinode-721181-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.259452163s)
	I0130 21:39:06.078411  664102 node.go:108] successfully drained node "m03"
	I0130 21:39:06.078798  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:39:06.079086  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:39:06.079470  664102 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0130 21:39:06.079523  664102 round_trippers.go:463] DELETE https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:39:06.079531  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:06.079539  664102 round_trippers.go:473]     Content-Type: application/json
	I0130 21:39:06.079545  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:06.079553  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:06.090953  664102 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0130 21:39:06.090976  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:06.090986  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:06.090995  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:06.091001  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:06.091008  664102 round_trippers.go:580]     Content-Length: 171
	I0130 21:39:06.091015  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:06 GMT
	I0130 21:39:06.091023  664102 round_trippers.go:580]     Audit-Id: a9458172-1c73-4e5d-97a8-2d6f926dc870
	I0130 21:39:06.091032  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:06.091068  664102 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-721181-m03","kind":"nodes","uid":"f8b13ad8-e768-466a-b155-3ab55af16d96"}}
	I0130 21:39:06.091107  664102 node.go:124] successfully deleted node "m03"
	I0130 21:39:06.091122  664102 start.go:321] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
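Because m03 existed before the restart, minikube first drains the stale node and deletes its Node object through the API before rejoining it. The equivalent manual sequence against this cluster would be roughly (flags mirror the drain command in the log, minus the deprecated --delete-local-data):

    kubectl drain multinode-721181-m03 --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
    kubectl delete node multinode-721181-m03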
	I0130 21:39:06.091147  664102 start.go:325] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0130 21:39:06.091171  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xgt9rk.0cqjv5zx0ixedyvn --discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-721181-m03"
	I0130 21:39:06.156574  664102 command_runner.go:130] > [preflight] Running pre-flight checks
	I0130 21:39:06.321322  664102 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0130 21:39:06.321356  664102 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0130 21:39:06.384983  664102 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 21:39:06.385605  664102 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 21:39:06.385629  664102 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0130 21:39:06.523590  664102 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0130 21:39:07.042400  664102 command_runner.go:130] > This node has joined the cluster:
	I0130 21:39:07.042427  664102 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0130 21:39:07.042433  664102 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0130 21:39:07.042440  664102 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0130 21:39:07.045032  664102 command_runner.go:130] ! W0130 21:39:06.148442    2507 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0130 21:39:07.045052  664102 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0130 21:39:07.045064  664102 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0130 21:39:07.045077  664102 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
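The join itself completes in about a second. The preflight warnings about kubelet.conf, port 10250 and pki/ca.crt already existing are expected on a rejoin, since the old files are still on the VM; that is why the command is run with --ignore-preflight-errors=all. From the control plane the result can be confirmed with:

    kubectl get nodes -o wide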
	I0130 21:39:07.045738  664102 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0130 21:39:07.343348  664102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=multinode-721181 minikube.k8s.io/updated_at=2024_01_30T21_39_07_0700 minikube.k8s.io/primary=false "-l minikube.k8s.io/primary!=true" --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 21:39:07.438633  664102 command_runner.go:130] > node/multinode-721181-m02 labeled
	I0130 21:39:07.450714  664102 command_runner.go:130] > node/multinode-721181-m03 labeled
	I0130 21:39:07.452890  664102 start.go:306] JoinCluster complete in 4.829902126s
	I0130 21:39:07.452911  664102 cni.go:84] Creating CNI manager for ""
	I0130 21:39:07.452918  664102 cni.go:136] 3 nodes found, recommending kindnet
	I0130 21:39:07.452983  664102 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0130 21:39:07.458911  664102 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0130 21:39:07.458951  664102 command_runner.go:130] >   Size: 2685752   	Blocks: 5248       IO Block: 4096   regular file
	I0130 21:39:07.458963  664102 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0130 21:39:07.458972  664102 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0130 21:39:07.458991  664102 command_runner.go:130] > Access: 2024-01-30 21:34:55.719579323 +0000
	I0130 21:39:07.459003  664102 command_runner.go:130] > Modify: 2023-12-28 22:53:36.000000000 +0000
	I0130 21:39:07.459013  664102 command_runner.go:130] > Change: 2024-01-30 21:34:53.860579323 +0000
	I0130 21:39:07.459022  664102 command_runner.go:130] >  Birth: -
	I0130 21:39:07.459088  664102 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0130 21:39:07.459105  664102 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0130 21:39:07.480890  664102 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0130 21:39:07.827636  664102 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0130 21:39:07.832830  664102 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0130 21:39:07.835616  664102 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0130 21:39:07.849996  664102 command_runner.go:130] > daemonset.apps/kindnet configured
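Re-applying the kindnet manifest is idempotent: the RBAC objects and service account come back unchanged and only the daemonset is reconfigured, which is what schedules a CNI pod onto the rejoined node. A quick verification (assuming kindnet's usual app=kindnet pod label):

    kubectl -n kube-system get daemonset kindnet
    kubectl -n kube-system get pods -l app=kindnet -o wide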
	I0130 21:39:07.852674  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:39:07.852884  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:39:07.853224  664102 round_trippers.go:463] GET https://192.168.39.174:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0130 21:39:07.853235  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.853244  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.853254  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.856877  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:39:07.856897  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.856907  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.856916  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.856926  664102 round_trippers.go:580]     Content-Length: 291
	I0130 21:39:07.856939  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.856951  664102 round_trippers.go:580]     Audit-Id: 3f83af94-3895-4f19-994a-7109b94d3ba4
	I0130 21:39:07.856962  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.856971  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.857082  664102 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"f33652aa-ee2d-484a-8c79-9724e39fcaab","resourceVersion":"864","creationTimestamp":"2024-01-30T21:24:57Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0130 21:39:07.857187  664102 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-721181" context rescaled to 1 replicas
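The GET against the coredns scale subresource shows spec.replicas already at 1, so nothing has to change; minikube keeps a single CoreDNS replica even on multi-node clusters. The kubectl equivalent of this step would simply be:

    kubectl -n kube-system scale deployment coredns --replicas=1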
	I0130 21:39:07.857223  664102 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.218 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}
	I0130 21:39:07.859161  664102 out.go:177] * Verifying Kubernetes components...
	I0130 21:39:07.860602  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:39:07.876747  664102 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:39:07.877012  664102 kapi.go:59] client config for multinode-721181: &rest.Config{Host:"https://192.168.39.174:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.crt", KeyFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/profiles/multinode-721181/client.key", CAFile:"/home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c281c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0130 21:39:07.877292  664102 node_ready.go:35] waiting up to 6m0s for node "multinode-721181-m03" to be "Ready" ...
	I0130 21:39:07.877390  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:39:07.877399  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.877407  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.877412  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.880800  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:39:07.880843  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.880853  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.880862  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.880870  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.880878  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.880886  664102 round_trippers.go:580]     Audit-Id: 6ab33446-e0e5-4160-be66-b233a12e6d17
	I0130 21:39:07.880898  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.881769  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m03","uid":"a2529e94-d46a-4c81-94f1-39c6ec6175a5","resourceVersion":"1203","creationTimestamp":"2024-01-30T21:39:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_39_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:39:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0130 21:39:07.882047  664102 node_ready.go:49] node "multinode-721181-m03" has status "Ready":"True"
	I0130 21:39:07.882065  664102 node_ready.go:38] duration metric: took 4.754481ms waiting for node "multinode-721181-m03" to be "Ready" ...
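The readiness poll returns on its first request because the rejoined node already reports a Ready condition by the time the check starts. Outside of minikube the same wait could be expressed as:

    kubectl wait --for=condition=Ready node/multinode-721181-m03 --timeout=6m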
	I0130 21:39:07.882107  664102 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:39:07.882177  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods
	I0130 21:39:07.882189  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.882200  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.882210  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.890722  664102 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0130 21:39:07.890744  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.890754  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.890762  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.890771  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.890783  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.890792  664102 round_trippers.go:580]     Audit-Id: 05db8a06-b6b9-4bd3-ba9b-44c1fcd53004
	I0130 21:39:07.890803  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.893768  664102 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1207"},"items":[{"metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81923 chars]
	I0130 21:39:07.896146  664102 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.896230  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-2jstl
	I0130 21:39:07.896241  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.896251  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.896261  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.898346  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:07.898364  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.898373  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.898381  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.898388  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.898398  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.898409  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.898417  664102 round_trippers.go:580]     Audit-Id: 0bedf791-5938-4404-9fd8-da76219f6f72
	I0130 21:39:07.898599  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-2jstl","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2","resourceVersion":"860","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"71253029-f9dc-497e-a1f9-46c57041d0de","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"71253029-f9dc-497e-a1f9-46c57041d0de\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0130 21:39:07.899044  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:07.899061  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.899071  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.899079  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.900853  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:39:07.900873  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.900883  664102 round_trippers.go:580]     Audit-Id: c9e63cd4-1501-45d1-bdab-6f18a88f08cb
	I0130 21:39:07.900891  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.900898  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.900908  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.900923  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.900931  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.901145  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:39:07.901411  664102 pod_ready.go:92] pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:07.901426  664102 pod_ready.go:81] duration metric: took 5.258234ms waiting for pod "coredns-5dd5756b68-2jstl" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.901436  664102 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.901511  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-721181
	I0130 21:39:07.901521  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.901532  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.901541  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.904325  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:07.904346  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.904356  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.904364  664102 round_trippers.go:580]     Audit-Id: c2f04516-eaca-4d35-90b5-75442ad720c1
	I0130 21:39:07.904373  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.904389  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.904397  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.904408  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.904518  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-721181","namespace":"kube-system","uid":"83f20d3f-5604-4e3c-a7c8-b38a9b20c035","resourceVersion":"838","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.174:2379","kubernetes.io/config.hash":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.mirror":"200d5f8761e2576886603d5ccbacc15d","kubernetes.io/config.seen":"2024-01-30T21:24:57.236042745Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0130 21:39:07.904937  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:07.904954  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.904964  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.904989  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.906793  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:39:07.906814  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.906822  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.906836  664102 round_trippers.go:580]     Audit-Id: b96ce766-10f3-443a-9c4a-3eaa06b0d72b
	I0130 21:39:07.906845  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.906853  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.906866  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.906876  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.907190  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:39:07.907592  664102 pod_ready.go:92] pod "etcd-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:07.907611  664102 pod_ready.go:81] duration metric: took 6.168475ms waiting for pod "etcd-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.907631  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.907690  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-721181
	I0130 21:39:07.907702  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.907714  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.907726  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.909550  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:39:07.909569  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.909579  664102 round_trippers.go:580]     Audit-Id: fb141444-3b9a-4165-8a35-f6ce80243a66
	I0130 21:39:07.909588  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.909597  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.909606  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.909617  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.909625  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.909781  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-721181","namespace":"kube-system","uid":"fbcc53e1-4691-4473-b215-2cb6daeaf321","resourceVersion":"850","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.174:8443","kubernetes.io/config.hash":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.mirror":"f12df51f0cf7fec96f3664a9ee0f4186","kubernetes.io/config.seen":"2024-01-30T21:24:57.236043778Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0130 21:39:07.910207  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:07.910222  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.910232  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.910242  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.912054  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:39:07.912070  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.912080  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.912089  664102 round_trippers.go:580]     Audit-Id: 4d490199-68e2-45ab-915e-ac327aedda34
	I0130 21:39:07.912098  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.912106  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.912118  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.912126  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.912495  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:39:07.912798  664102 pod_ready.go:92] pod "kube-apiserver-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:07.912815  664102 pod_ready.go:81] duration metric: took 5.172309ms waiting for pod "kube-apiserver-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.912826  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.912871  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-721181
	I0130 21:39:07.912881  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.912891  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.912902  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.914540  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:39:07.914553  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.914559  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.914564  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.914569  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.914576  664102 round_trippers.go:580]     Audit-Id: 4ba62431-e0c4-4023-a8ff-c35fd5a488c2
	I0130 21:39:07.914584  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.914595  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.914692  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-721181","namespace":"kube-system","uid":"de8beec4-5cad-4405-b856-7475b95559ba","resourceVersion":"837","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.mirror":"1e3639010d361b30109ef6c46b132307","kubernetes.io/config.seen":"2024-01-30T21:24:57.236037857Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0130 21:39:07.914994  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:07.915007  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:07.915017  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:07.915026  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:07.916610  664102 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0130 21:39:07.916628  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:07.916636  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:07 GMT
	I0130 21:39:07.916645  664102 round_trippers.go:580]     Audit-Id: 67b8147d-040f-44ab-a104-974dcc19842a
	I0130 21:39:07.916653  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:07.916660  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:07.916669  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:07.916681  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:07.916844  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:39:07.917205  664102 pod_ready.go:92] pod "kube-controller-manager-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:07.917225  664102 pod_ready.go:81] duration metric: took 4.391762ms waiting for pod "kube-controller-manager-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:07.917238  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:08.077604  664102 request.go:629] Waited for 160.290308ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:39:08.077663  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-49rq4
	I0130 21:39:08.077669  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:08.077677  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:08.077685  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:08.080357  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:08.080381  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:08.080389  664102 round_trippers.go:580]     Audit-Id: b74e9038-fb66-4c2f-ae87-34370390ad6b
	I0130 21:39:08.080397  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:08.080404  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:08.080413  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:08.080424  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:08.080434  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:08 GMT
	I0130 21:39:08.080661  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-49rq4","generateName":"kube-proxy-","namespace":"kube-system","uid":"63c8c4a9-2d5e-4aca-b3f4-b239c2adcfa3","resourceVersion":"812","creationTimestamp":"2024-01-30T21:25:10Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:25:10Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0130 21:39:08.277533  664102 request.go:629] Waited for 196.309233ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:08.277619  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:08.277627  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:08.277639  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:08.277650  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:08.280265  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:08.280294  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:08.280306  664102 round_trippers.go:580]     Audit-Id: dff9feed-8e09-4b0e-a99e-7dc9e2ed9db3
	I0130 21:39:08.280315  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:08.280323  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:08.280331  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:08.280339  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:08.280348  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:08 GMT
	I0130 21:39:08.280509  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:39:08.280935  664102 pod_ready.go:92] pod "kube-proxy-49rq4" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:08.280956  664102 pod_ready.go:81] duration metric: took 363.707175ms waiting for pod "kube-proxy-49rq4" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:08.280965  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:08.477807  664102 request.go:629] Waited for 196.772482ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:39:08.477878  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-lwg96
	I0130 21:39:08.477883  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:08.477892  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:08.477898  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:08.480520  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:08.480550  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:08.480561  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:08.480571  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:08.480580  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:08 GMT
	I0130 21:39:08.480591  664102 round_trippers.go:580]     Audit-Id: 1e3d191f-4020-4098-bed9-9fdf737bb3d1
	I0130 21:39:08.480600  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:08.480615  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:08.480742  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-lwg96","generateName":"kube-proxy-","namespace":"kube-system","uid":"68cc319c-45c4-4a65-9712-d4e419acd7d6","resourceVersion":"1177","creationTimestamp":"2024-01-30T21:26:51Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:51Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0130 21:39:08.677563  664102 request.go:629] Waited for 196.248357ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:39:08.677646  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m03
	I0130 21:39:08.677651  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:08.677659  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:08.677668  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:08.680940  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:39:08.680967  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:08.680978  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:08.680987  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:08.680995  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:08.681004  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:08.681013  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:08 GMT
	I0130 21:39:08.681025  664102 round_trippers.go:580]     Audit-Id: b19e088e-14bd-4512-b022-f6155cef0948
	I0130 21:39:08.681156  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m03","uid":"a2529e94-d46a-4c81-94f1-39c6ec6175a5","resourceVersion":"1203","creationTimestamp":"2024-01-30T21:39:06Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_39_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:39:06Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3994 chars]
	I0130 21:39:08.681529  664102 pod_ready.go:92] pod "kube-proxy-lwg96" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:08.681562  664102 pod_ready.go:81] duration metric: took 400.581688ms waiting for pod "kube-proxy-lwg96" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:08.681573  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:08.877832  664102 request.go:629] Waited for 196.166712ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:39:08.877931  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s9pwd
	I0130 21:39:08.877943  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:08.877952  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:08.877964  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:08.880947  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:08.880965  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:08.880977  664102 round_trippers.go:580]     Audit-Id: 82cf0e98-876f-4c07-9e80-3e30684163f0
	I0130 21:39:08.880986  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:08.880996  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:08.881012  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:08.881021  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:08.881033  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:08 GMT
	I0130 21:39:08.881240  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-s9pwd","generateName":"kube-proxy-","namespace":"kube-system","uid":"e6594579-7b2f-4ab5-b7f2-0b176bad1705","resourceVersion":"1032","creationTimestamp":"2024-01-30T21:26:05Z","labels":{"controller-revision-hash":"8486c7d9cd","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4d3aec1d-3bc1-468d-802c-b867bff0bf7b","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:26:05Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4d3aec1d-3bc1-468d-802c-b867bff0bf7b\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5727 chars]
	I0130 21:39:09.078121  664102 request.go:629] Waited for 196.374508ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:39:09.078215  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181-m02
	I0130 21:39:09.078227  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:09.078241  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:09.078254  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:09.081602  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:39:09.081623  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:09.081630  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:09.081636  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:09 GMT
	I0130 21:39:09.081641  664102 round_trippers.go:580]     Audit-Id: 14920d38-c7e1-4762-81e0-c86c7dc0d25e
	I0130 21:39:09.081646  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:09.081651  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:09.081656  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:09.081807  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181-m02","uid":"be090718-b5cd-4d45-9ba2-6425fd24503e","resourceVersion":"1202","creationTimestamp":"2024-01-30T21:37:26Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_01_30T21_39_07_0700","minikube.k8s.io/version":"v1.32.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:37:26Z","fieldsType":"FieldsV1","fieldsV1":{"f:meta
data":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec": [truncated 3993 chars]
	I0130 21:39:09.082208  664102 pod_ready.go:92] pod "kube-proxy-s9pwd" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:09.082235  664102 pod_ready.go:81] duration metric: took 400.64344ms waiting for pod "kube-proxy-s9pwd" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:09.082249  664102 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:09.277610  664102 request.go:629] Waited for 195.27839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:39:09.277690  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-721181
	I0130 21:39:09.277695  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:09.277703  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:09.277709  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:09.280379  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:09.280405  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:09.280415  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:09.280423  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:09.280431  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:09.280440  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:09.280448  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:09 GMT
	I0130 21:39:09.280458  664102 round_trippers.go:580]     Audit-Id: 6d1bd67e-b246-4784-8d51-9b2f8e19f81a
	I0130 21:39:09.280603  664102 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-721181","namespace":"kube-system","uid":"d7e4675b-0e8c-46de-9b39-435d25004a88","resourceVersion":"852","creationTimestamp":"2024-01-30T21:24:57Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"48930a2236670664c600a427fcb648de","kubernetes.io/config.mirror":"48930a2236670664c600a427fcb648de","kubernetes.io/config.seen":"2024-01-30T21:24:57.236041601Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-01-30T21:24:57Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0130 21:39:09.478572  664102 request.go:629] Waited for 197.426429ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:09.478668  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes/multinode-721181
	I0130 21:39:09.478681  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:09.478694  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:09.478707  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:09.481122  664102 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0130 21:39:09.481139  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:09.481148  664102 round_trippers.go:580]     Audit-Id: d8f9c7b3-bdc3-4712-accb-f29a04bef2f7
	I0130 21:39:09.481157  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:09.481165  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:09.481172  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:09.481181  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:09.481190  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:09 GMT
	I0130 21:39:09.481539  664102 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2024-01-30T21:24:54Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0130 21:39:09.482040  664102 pod_ready.go:92] pod "kube-scheduler-multinode-721181" in "kube-system" namespace has status "Ready":"True"
	I0130 21:39:09.482076  664102 pod_ready.go:81] duration metric: took 399.810011ms waiting for pod "kube-scheduler-multinode-721181" in "kube-system" namespace to be "Ready" ...
	I0130 21:39:09.482093  664102 pod_ready.go:38] duration metric: took 1.599970075s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 21:39:09.482120  664102 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 21:39:09.482193  664102 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:39:09.497777  664102 system_svc.go:56] duration metric: took 15.654174ms WaitForService to wait for kubelet.
	I0130 21:39:09.497796  664102 kubeadm.go:581] duration metric: took 1.640545298s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 21:39:09.497814  664102 node_conditions.go:102] verifying NodePressure condition ...
	I0130 21:39:09.678241  664102 request.go:629] Waited for 180.342903ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.174:8443/api/v1/nodes
	I0130 21:39:09.678329  664102 round_trippers.go:463] GET https://192.168.39.174:8443/api/v1/nodes
	I0130 21:39:09.678335  664102 round_trippers.go:469] Request Headers:
	I0130 21:39:09.678344  664102 round_trippers.go:473]     Accept: application/json, */*
	I0130 21:39:09.678353  664102 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0130 21:39:09.681801  664102 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0130 21:39:09.681832  664102 round_trippers.go:577] Response Headers:
	I0130 21:39:09.681844  664102 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 32f9bfae-7f91-42a1-ba67-910d455e1a6b
	I0130 21:39:09.681854  664102 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 164b5ff3-ea12-408e-aa63-329e3bec69c7
	I0130 21:39:09.681863  664102 round_trippers.go:580]     Date: Tue, 30 Jan 2024 21:39:09 GMT
	I0130 21:39:09.681874  664102 round_trippers.go:580]     Audit-Id: 6bf2ac16-28e6-47ed-97b3-aa4ae4a2c555
	I0130 21:39:09.681880  664102 round_trippers.go:580]     Cache-Control: no-cache, private
	I0130 21:39:09.681893  664102 round_trippers.go:580]     Content-Type: application/json
	I0130 21:39:09.682235  664102 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1216"},"items":[{"metadata":{"name":"multinode-721181","uid":"5e42610a-1a68-4534-9d3c-50d75d913c04","resourceVersion":"880","creationTimestamp":"2024-01-30T21:24:54Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-721181","kubernetes.io/os":"linux","minikube.k8s.io/commit":"ee797c3c8a930c6d412d0b471af21f4da96305b5","minikube.k8s.io/name":"multinode-721181","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_01_30T21_24_58_0700","minikube.k8s.io/version":"v1.32.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 16238 chars]
	I0130 21:39:09.683114  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:39:09.683144  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:39:09.683157  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:39:09.683168  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:39:09.683176  664102 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 21:39:09.683183  664102 node_conditions.go:123] node cpu capacity is 2
	I0130 21:39:09.683198  664102 node_conditions.go:105] duration metric: took 185.379274ms to run NodePressure ...
	I0130 21:39:09.683212  664102 start.go:228] waiting for startup goroutines ...
	I0130 21:39:09.683244  664102 start.go:242] writing updated cluster config ...
	I0130 21:39:09.683665  664102 ssh_runner.go:195] Run: rm -f paused
	I0130 21:39:09.734681  664102 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 21:39:09.737753  664102 out.go:177] * Done! kubectl is now configured to use "multinode-721181" cluster and "default" namespace by default
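The pod_ready lines above show the pattern the run follows before declaring the cluster healthy: GET each control-plane pod, inspect its Ready condition, then GET its node, repeating until everything reports "Ready":"True". Below is a minimal client-go sketch of that readiness poll, not minikube's actual pod_ready.go; the kubeconfig path, namespace, pod name, and the 6-minute timeout are placeholders taken from the log.

	// readiness_poll.go: a minimal sketch (assumptions noted above) of polling a pod's
	// Ready condition via the Kubernetes API, as the log above does for etcd, the
	// apiserver, controller-manager, kube-proxy, and the scheduler.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady reports whether the pod's PodReady condition is True,
	// mirroring the `has status "Ready":"True"` lines in the log.
	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Placeholder kubeconfig path; minikube uses the cluster's own kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 2s for up to 6 minutes, matching the "waiting up to 6m0s" lines above.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "etcd-multinode-721181", metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not ready yet" and keep polling
				}
				return isPodReady(pod), nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("pod is Ready")
	}

The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's own rate limiter spacing out these GETs (by default roughly 5 requests/second with a burst of 10 unless the rest.Config overrides it), which is why consecutive pod and node lookups in the log are separated by ~200ms waits.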
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 21:34:54 UTC, ends at Tue 2024-01-30 21:39:10 UTC. --
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.816531110Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706650750816517819,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c0f60141-fb0a-40b0-b554-be0543be68f3 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.817458803Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=952fa991-59a1-49e2-993b-27a5fbfa9eb5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.817524024Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=952fa991-59a1-49e2-993b-27a5fbfa9eb5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.817777853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:730e79c1bb10bf67247a4cbd57c81ece688b4373cc9c1ed61c3e8d6a06b9eace,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706650560571668782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfcf2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55166ff49cef07a4a8210b010ef0a79d17b1834e023b7a8bd4795d5334fb1bec,PodSandboxId:4d65723bc7374d1b29974f5ca907368b5e32a2e8e774f3b1febaf1af371bf299,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706650546997960087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-zdhbw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e4a21d6-a197-4a33-870a-840e3d20436f,},Annotations:map[string]string{io.kubernetes.container.hash: 45d5b794,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ee4def490de224ba91b54224fe8b11febec674751cce2a84e884c93d6c307b,PodSandboxId:4576bc6aa072c3f32b72a3573032d8e5b5d4e18df9fdf4d8eaae75658d0d2f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706650545789213441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2jstl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 330bb1ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42aa494e9ae46b3b0113939035c49c45325d0356b16686f30d01cda1eafef4c,PodSandboxId:7da3abfe94d978d81f166c568a6ce93d21cc3612fdb250a8d959a372068bbdb7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706650532689086550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zt7wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 49dc74c8-c0dc-4421-99f2-b40bcf3429ff,},Annotations:map[string]string{io.kubernetes.container.hash: b411c604,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b7dfd980813e40a6df79225505d83739fb11d56f3a3364814f69b8ad62b7d1,PodSandboxId:c8aefe01d9eb795ae65abcf5a17d43ce07f91eb65d2746a9f073bd7ffb111d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706650530374947040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-49rq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63c8c4a9-2d5e-4aca-b3f4-b239c2
adcfa3,},Annotations:map[string]string{io.kubernetes.container.hash: e4e46879,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3194c5077033b9eb54f0c7c20e70767aaa6cf9f7abd36f8fc71b7454b4eee41,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706650530220400997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfc
f2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b47c64886b0fa2bae9e5b6e67c3f8a4242f63c57da5914ec03816eb79c61c6,PodSandboxId:ad0566cab72540d02eb73ce6b4e2f890c876cffd2e5a30df791496c97bff1703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706650523999464233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48930a2236670664c600a427fcb648de,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e05f2512a78a4b523bbd92de0f9eb1d058550b3729b7dcd6647f651c9bd4bb,PodSandboxId:08646a592cb3ea016d6bbef047c55cd7e8cb49e1d2bc5f02897e169040bac216,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706650523763765409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 200d5f8761e2576886603d5ccbacc15d,},Annotations:map[string]string{io.kubernetes.container.has
h: 5aaaf365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17938d324bae39ab1be9550e25fdcc9a6843032ff0a51a0eef3eaaead23196f6,PodSandboxId:1dd2c3d821cb6888a45e1b75852b4aef2e432bab44af2a264ed79b80ccf4922c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706650523454680032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12df51f0cf7fec96f3664a9ee0f4186,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc298a9b9e362aff4646aa9e27bc0ec96d09b5ac407603d8cd2b3451ce20f517,PodSandboxId:5f6c028ada9164fc0986fe33b8c660fac233611ad13a2c7d30400c9421df27a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706650523374959577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3639010d361b30109ef6c46b132307,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=952fa991-59a1-49e2-993b-27a5fbfa9eb5 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.857431662Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=75d560e9-b98f-4d83-a77b-375f428ff530 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.857506109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=75d560e9-b98f-4d83-a77b-375f428ff530 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.858815964Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=dbe65425-85c3-4edd-b45c-062ebd9f44fd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.859229170Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706650750859215285,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=dbe65425-85c3-4edd-b45c-062ebd9f44fd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.860006977Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aff553c6-68ec-491d-8d94-a4af2624b1b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.860053821Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aff553c6-68ec-491d-8d94-a4af2624b1b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.860272457Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:730e79c1bb10bf67247a4cbd57c81ece688b4373cc9c1ed61c3e8d6a06b9eace,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706650560571668782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfcf2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55166ff49cef07a4a8210b010ef0a79d17b1834e023b7a8bd4795d5334fb1bec,PodSandboxId:4d65723bc7374d1b29974f5ca907368b5e32a2e8e774f3b1febaf1af371bf299,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706650546997960087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-zdhbw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e4a21d6-a197-4a33-870a-840e3d20436f,},Annotations:map[string]string{io.kubernetes.container.hash: 45d5b794,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ee4def490de224ba91b54224fe8b11febec674751cce2a84e884c93d6c307b,PodSandboxId:4576bc6aa072c3f32b72a3573032d8e5b5d4e18df9fdf4d8eaae75658d0d2f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706650545789213441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2jstl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 330bb1ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42aa494e9ae46b3b0113939035c49c45325d0356b16686f30d01cda1eafef4c,PodSandboxId:7da3abfe94d978d81f166c568a6ce93d21cc3612fdb250a8d959a372068bbdb7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706650532689086550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zt7wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 49dc74c8-c0dc-4421-99f2-b40bcf3429ff,},Annotations:map[string]string{io.kubernetes.container.hash: b411c604,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b7dfd980813e40a6df79225505d83739fb11d56f3a3364814f69b8ad62b7d1,PodSandboxId:c8aefe01d9eb795ae65abcf5a17d43ce07f91eb65d2746a9f073bd7ffb111d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706650530374947040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-49rq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63c8c4a9-2d5e-4aca-b3f4-b239c2
adcfa3,},Annotations:map[string]string{io.kubernetes.container.hash: e4e46879,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3194c5077033b9eb54f0c7c20e70767aaa6cf9f7abd36f8fc71b7454b4eee41,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706650530220400997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfc
f2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b47c64886b0fa2bae9e5b6e67c3f8a4242f63c57da5914ec03816eb79c61c6,PodSandboxId:ad0566cab72540d02eb73ce6b4e2f890c876cffd2e5a30df791496c97bff1703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706650523999464233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48930a2236670664c600a427fcb648de,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e05f2512a78a4b523bbd92de0f9eb1d058550b3729b7dcd6647f651c9bd4bb,PodSandboxId:08646a592cb3ea016d6bbef047c55cd7e8cb49e1d2bc5f02897e169040bac216,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706650523763765409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 200d5f8761e2576886603d5ccbacc15d,},Annotations:map[string]string{io.kubernetes.container.has
h: 5aaaf365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17938d324bae39ab1be9550e25fdcc9a6843032ff0a51a0eef3eaaead23196f6,PodSandboxId:1dd2c3d821cb6888a45e1b75852b4aef2e432bab44af2a264ed79b80ccf4922c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706650523454680032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12df51f0cf7fec96f3664a9ee0f4186,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc298a9b9e362aff4646aa9e27bc0ec96d09b5ac407603d8cd2b3451ce20f517,PodSandboxId:5f6c028ada9164fc0986fe33b8c660fac233611ad13a2c7d30400c9421df27a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706650523374959577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3639010d361b30109ef6c46b132307,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aff553c6-68ec-491d-8d94-a4af2624b1b0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.897225074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=19fedf83-54fb-4f3a-bab9-cfda7c359fa2 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.897279802Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=19fedf83-54fb-4f3a-bab9-cfda7c359fa2 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.898839576Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=432b741a-b33a-4ca7-aa08-1840fec3fcc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.899256837Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706650750899242300,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=432b741a-b33a-4ca7-aa08-1840fec3fcc1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.899761770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c841b4ff-395f-4476-b6d0-36fedbffc181 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.899804497Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c841b4ff-395f-4476-b6d0-36fedbffc181 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.900061552Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:730e79c1bb10bf67247a4cbd57c81ece688b4373cc9c1ed61c3e8d6a06b9eace,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706650560571668782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfcf2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55166ff49cef07a4a8210b010ef0a79d17b1834e023b7a8bd4795d5334fb1bec,PodSandboxId:4d65723bc7374d1b29974f5ca907368b5e32a2e8e774f3b1febaf1af371bf299,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706650546997960087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-zdhbw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e4a21d6-a197-4a33-870a-840e3d20436f,},Annotations:map[string]string{io.kubernetes.container.hash: 45d5b794,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ee4def490de224ba91b54224fe8b11febec674751cce2a84e884c93d6c307b,PodSandboxId:4576bc6aa072c3f32b72a3573032d8e5b5d4e18df9fdf4d8eaae75658d0d2f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706650545789213441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2jstl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 330bb1ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42aa494e9ae46b3b0113939035c49c45325d0356b16686f30d01cda1eafef4c,PodSandboxId:7da3abfe94d978d81f166c568a6ce93d21cc3612fdb250a8d959a372068bbdb7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706650532689086550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zt7wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 49dc74c8-c0dc-4421-99f2-b40bcf3429ff,},Annotations:map[string]string{io.kubernetes.container.hash: b411c604,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b7dfd980813e40a6df79225505d83739fb11d56f3a3364814f69b8ad62b7d1,PodSandboxId:c8aefe01d9eb795ae65abcf5a17d43ce07f91eb65d2746a9f073bd7ffb111d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706650530374947040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-49rq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63c8c4a9-2d5e-4aca-b3f4-b239c2
adcfa3,},Annotations:map[string]string{io.kubernetes.container.hash: e4e46879,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3194c5077033b9eb54f0c7c20e70767aaa6cf9f7abd36f8fc71b7454b4eee41,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706650530220400997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfc
f2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b47c64886b0fa2bae9e5b6e67c3f8a4242f63c57da5914ec03816eb79c61c6,PodSandboxId:ad0566cab72540d02eb73ce6b4e2f890c876cffd2e5a30df791496c97bff1703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706650523999464233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48930a2236670664c600a427fcb648de,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e05f2512a78a4b523bbd92de0f9eb1d058550b3729b7dcd6647f651c9bd4bb,PodSandboxId:08646a592cb3ea016d6bbef047c55cd7e8cb49e1d2bc5f02897e169040bac216,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706650523763765409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 200d5f8761e2576886603d5ccbacc15d,},Annotations:map[string]string{io.kubernetes.container.has
h: 5aaaf365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17938d324bae39ab1be9550e25fdcc9a6843032ff0a51a0eef3eaaead23196f6,PodSandboxId:1dd2c3d821cb6888a45e1b75852b4aef2e432bab44af2a264ed79b80ccf4922c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706650523454680032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12df51f0cf7fec96f3664a9ee0f4186,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc298a9b9e362aff4646aa9e27bc0ec96d09b5ac407603d8cd2b3451ce20f517,PodSandboxId:5f6c028ada9164fc0986fe33b8c660fac233611ad13a2c7d30400c9421df27a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706650523374959577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3639010d361b30109ef6c46b132307,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c841b4ff-395f-4476-b6d0-36fedbffc181 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.935867069Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=3f7dc64e-9990-42fc-95c0-2f51fd55ebe2 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.935932372Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=3f7dc64e-9990-42fc-95c0-2f51fd55ebe2 name=/runtime.v1.RuntimeService/Version
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.937184965Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6dc7e465-e97a-4f76-b61a-015799a7da74 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.937652883Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706650750937637977,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125543,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6dc7e465-e97a-4f76-b61a-015799a7da74 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.938457110Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cd406793-b6cc-46ba-a829-17533ddb6a78 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.938499516Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cd406793-b6cc-46ba-a829-17533ddb6a78 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 21:39:10 multinode-721181 crio[715]: time="2024-01-30 21:39:10.938741746Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:730e79c1bb10bf67247a4cbd57c81ece688b4373cc9c1ed61c3e8d6a06b9eace,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706650560571668782,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfcf2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55166ff49cef07a4a8210b010ef0a79d17b1834e023b7a8bd4795d5334fb1bec,PodSandboxId:4d65723bc7374d1b29974f5ca907368b5e32a2e8e774f3b1febaf1af371bf299,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1706650546997960087,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-zdhbw,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 3e4a21d6-a197-4a33-870a-840e3d20436f,},Annotations:map[string]string{io.kubernetes.container.hash: 45d5b794,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b7ee4def490de224ba91b54224fe8b11febec674751cce2a84e884c93d6c307b,PodSandboxId:4576bc6aa072c3f32b72a3573032d8e5b5d4e18df9fdf4d8eaae75658d0d2f95,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706650545789213441,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2jstl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2,},Annotations:map[string]string{io.kubernetes.container.hash: 330bb1ca,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e42aa494e9ae46b3b0113939035c49c45325d0356b16686f30d01cda1eafef4c,PodSandboxId:7da3abfe94d978d81f166c568a6ce93d21cc3612fdb250a8d959a372068bbdb7,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052,State:CONTAINER_RUNNING,CreatedAt:1706650532689086550,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-zt7wg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: 49dc74c8-c0dc-4421-99f2-b40bcf3429ff,},Annotations:map[string]string{io.kubernetes.container.hash: b411c604,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c2b7dfd980813e40a6df79225505d83739fb11d56f3a3364814f69b8ad62b7d1,PodSandboxId:c8aefe01d9eb795ae65abcf5a17d43ce07f91eb65d2746a9f073bd7ffb111d82,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706650530374947040,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-49rq4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 63c8c4a9-2d5e-4aca-b3f4-b239c2
adcfa3,},Annotations:map[string]string{io.kubernetes.container.hash: e4e46879,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3194c5077033b9eb54f0c7c20e70767aaa6cf9f7abd36f8fc71b7454b4eee41,PodSandboxId:ba72bbd150b1ce0b96ac428f050714fe01b4505e0a1d35e7ae956d0a5825c815,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706650530220400997,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5f9b77ce-6169-4580-ae1c-04759bfc
f2d7,},Annotations:map[string]string{io.kubernetes.container.hash: a85f6380,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e0b47c64886b0fa2bae9e5b6e67c3f8a4242f63c57da5914ec03816eb79c61c6,PodSandboxId:ad0566cab72540d02eb73ce6b4e2f890c876cffd2e5a30df791496c97bff1703,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706650523999464233,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 48930a2236670664c600a427fcb648de,},Annot
ations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:55e05f2512a78a4b523bbd92de0f9eb1d058550b3729b7dcd6647f651c9bd4bb,PodSandboxId:08646a592cb3ea016d6bbef047c55cd7e8cb49e1d2bc5f02897e169040bac216,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706650523763765409,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 200d5f8761e2576886603d5ccbacc15d,},Annotations:map[string]string{io.kubernetes.container.has
h: 5aaaf365,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:17938d324bae39ab1be9550e25fdcc9a6843032ff0a51a0eef3eaaead23196f6,PodSandboxId:1dd2c3d821cb6888a45e1b75852b4aef2e432bab44af2a264ed79b80ccf4922c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706650523454680032,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f12df51f0cf7fec96f3664a9ee0f4186,},Annotations:map[string]string{io.kubernetes.container.hash: f8aebfca,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc298a9b9e362aff4646aa9e27bc0ec96d09b5ac407603d8cd2b3451ce20f517,PodSandboxId:5f6c028ada9164fc0986fe33b8c660fac233611ad13a2c7d30400c9421df27a3,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706650523374959577,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-721181,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1e3639010d361b30109ef6c46b132307,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cd406793-b6cc-46ba-a829-17533ddb6a78 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	730e79c1bb10b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   ba72bbd150b1c       storage-provisioner
	55166ff49cef0       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   4d65723bc7374       busybox-5b5d89c9d6-zdhbw
	b7ee4def490de       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   4576bc6aa072c       coredns-5dd5756b68-2jstl
	e42aa494e9ae4       c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc                                      3 minutes ago       Running             kindnet-cni               1                   7da3abfe94d97       kindnet-zt7wg
	c2b7dfd980813       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   c8aefe01d9eb7       kube-proxy-49rq4
	e3194c5077033       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   ba72bbd150b1c       storage-provisioner
	e0b47c64886b0       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   ad0566cab7254       kube-scheduler-multinode-721181
	55e05f2512a78       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   08646a592cb3e       etcd-multinode-721181
	17938d324bae3       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   1dd2c3d821cb6       kube-apiserver-multinode-721181
	dc298a9b9e362       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   5f6c028ada916       kube-controller-manager-multinode-721181
	
	
	==> coredns [b7ee4def490de224ba91b54224fe8b11febec674751cce2a84e884c93d6c307b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:50154 - 61086 "HINFO IN 3770930361926203476.3874221606329534379. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.049041136s
	
	
	==> describe nodes <==
	Name:               multinode-721181
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-721181
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=multinode-721181
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T21_24_58_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 21:24:54 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-721181
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 21:39:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 21:35:59 +0000   Tue, 30 Jan 2024 21:24:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 21:35:59 +0000   Tue, 30 Jan 2024 21:24:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 21:35:59 +0000   Tue, 30 Jan 2024 21:24:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 21:35:59 +0000   Tue, 30 Jan 2024 21:35:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.174
	  Hostname:    multinode-721181
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a8f9e7bf490c48aeae4082adf7601a00
	  System UUID:                a8f9e7bf-490c-48ae-ae40-82adf7601a00
	  Boot ID:                    88859189-e138-499a-a856-bf0da5067f06
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-zdhbw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-5dd5756b68-2jstl                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-multinode-721181                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-zt7wg                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-multinode-721181             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-721181    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-49rq4                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-multinode-721181             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-721181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-721181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-721181 status is now: NodeHasSufficientPID
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           14m                    node-controller  Node multinode-721181 event: Registered Node multinode-721181 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-721181 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-721181 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-721181 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-721181 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m30s                  node-controller  Node multinode-721181 event: Registered Node multinode-721181 in Controller
	
	
	Name:               multinode-721181-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-721181-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=multinode-721181
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_30T21_39_07_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 21:37:26 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-721181-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 21:39:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 21:37:26 +0000   Tue, 30 Jan 2024 21:37:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 21:37:26 +0000   Tue, 30 Jan 2024 21:37:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 21:37:26 +0000   Tue, 30 Jan 2024 21:37:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 21:37:26 +0000   Tue, 30 Jan 2024 21:37:26 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.69
	  Hostname:    multinode-721181-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1aebc2a6efa4a14a4fa2509855690ec
	  System UUID:                a1aebc2a-6efa-4a14-a4fa-2509855690ec
	  Boot ID:                    9446f62c-d5f0-4829-a366-59dfb8faaa7c
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-784g8    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-8thzp               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-s9pwd            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From        Message
	  ----     ------                   ----                 ----        -------
	  Normal   Starting                 13m                  kube-proxy  
	  Normal   Starting                 103s                 kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)    kubelet     Node multinode-721181-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)    kubelet     Node multinode-721181-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)    kubelet     Node multinode-721181-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                  kubelet     Node multinode-721181-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m52s                kubelet     Node multinode-721181-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m8s (x2 over 3m8s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 105s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)  kubelet     Node multinode-721181-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)  kubelet     Node multinode-721181-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)  kubelet     Node multinode-721181-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                 kubelet     Node multinode-721181-m02 status is now: NodeReady
	
	
	Name:               multinode-721181-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-721181-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=multinode-721181
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_01_30T21_39_07_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 21:39:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-721181-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 21:39:06 +0000   Tue, 30 Jan 2024 21:39:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 21:39:06 +0000   Tue, 30 Jan 2024 21:39:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 21:39:06 +0000   Tue, 30 Jan 2024 21:39:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 21:39:06 +0000   Tue, 30 Jan 2024 21:39:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.218
	  Hostname:    multinode-721181-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 7391f1897f7a4b74871acb1893049bd3
	  System UUID:                7391f189-7f7a-4b74-871a-cb1893049bd3
	  Boot ID:                    5081f183-27c5-48fb-b1a1-86fccb2099d0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-rgkc4    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-qxwqk               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-lwg96            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 5s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-721181-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-721181-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-721181-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-721181-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  11m (x2 over 11m)  kubelet     Node multinode-721181-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m (x2 over 11m)  kubelet     Node multinode-721181-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x2 over 11m)  kubelet     Node multinode-721181-m03 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m                kubelet     Node multinode-721181-m03 status is now: NodeReady
	  Normal   NodeNotReady             70s                kubelet     Node multinode-721181-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        39s (x2 over 99s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 5s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  5s (x2 over 5s)    kubelet     Node multinode-721181-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x2 over 5s)    kubelet     Node multinode-721181-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x2 over 5s)    kubelet     Node multinode-721181-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                5s                 kubelet     Node multinode-721181-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[Jan30 21:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066828] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.355264] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.355400] systemd-fstab-generator[115]: Ignoring "noauto" for root device
	[  +0.135852] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.479624] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Jan30 21:35] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.110401] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.133954] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.104328] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.206570] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +16.717030] systemd-fstab-generator[911]: Ignoring "noauto" for root device
	[ +19.515857] kauditd_printk_skb: 18 callbacks suppressed
	
	
	==> etcd [55e05f2512a78a4b523bbd92de0f9eb1d058550b3729b7dcd6647f651c9bd4bb] <==
	{"level":"info","ts":"2024-01-30T21:35:25.584159Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-30T21:35:25.584186Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-01-30T21:35:25.584537Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 switched to configuration voters=(8283008283800597511)"}
	{"level":"info","ts":"2024-01-30T21:35:25.584724Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","added-peer-id":"72f328261b8d7407","added-peer-peer-urls":["https://192.168.39.174:2380"]}
	{"level":"info","ts":"2024-01-30T21:35:25.584929Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3f65b9220f75d9a5","local-member-id":"72f328261b8d7407","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T21:35:25.589687Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T21:35:25.591025Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-30T21:35:25.591233Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"72f328261b8d7407","initial-advertise-peer-urls":["https://192.168.39.174:2380"],"listen-peer-urls":["https://192.168.39.174:2380"],"advertise-client-urls":["https://192.168.39.174:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.174:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T21:35:25.591319Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T21:35:25.591432Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-01-30T21:35:25.591438Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.174:2380"}
	{"level":"info","ts":"2024-01-30T21:35:27.251283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 is starting a new election at term 2"}
	{"level":"info","ts":"2024-01-30T21:35:27.25139Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-01-30T21:35:27.251439Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgPreVoteResp from 72f328261b8d7407 at term 2"}
	{"level":"info","ts":"2024-01-30T21:35:27.251475Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became candidate at term 3"}
	{"level":"info","ts":"2024-01-30T21:35:27.2515Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 received MsgVoteResp from 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-01-30T21:35:27.251562Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"72f328261b8d7407 became leader at term 3"}
	{"level":"info","ts":"2024-01-30T21:35:27.251672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 72f328261b8d7407 elected leader 72f328261b8d7407 at term 3"}
	{"level":"info","ts":"2024-01-30T21:35:27.254554Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"72f328261b8d7407","local-member-attributes":"{Name:multinode-721181 ClientURLs:[https://192.168.39.174:2379]}","request-path":"/0/members/72f328261b8d7407/attributes","cluster-id":"3f65b9220f75d9a5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T21:35:27.254701Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T21:35:27.255031Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T21:35:27.255079Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T21:35:27.254722Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T21:35:27.256252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.174:2379"}
	{"level":"info","ts":"2024-01-30T21:35:27.256476Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:39:11 up 4 min,  0 users,  load average: 0.10, 0.16, 0.08
	Linux multinode-721181 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kindnet [e42aa494e9ae46b3b0113939035c49c45325d0356b16686f30d01cda1eafef4c] <==
	I0130 21:38:24.323272       1 main.go:250] Node multinode-721181-m03 has CIDR [10.244.3.0/24] 
	I0130 21:38:34.335733       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0130 21:38:34.335926       1 main.go:227] handling current node
	I0130 21:38:34.335966       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I0130 21:38:34.335993       1 main.go:250] Node multinode-721181-m02 has CIDR [10.244.1.0/24] 
	I0130 21:38:34.336165       1 main.go:223] Handling node with IPs: map[192.168.39.218:{}]
	I0130 21:38:34.336201       1 main.go:250] Node multinode-721181-m03 has CIDR [10.244.3.0/24] 
	I0130 21:38:44.351720       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0130 21:38:44.352023       1 main.go:227] handling current node
	I0130 21:38:44.352062       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I0130 21:38:44.352271       1 main.go:250] Node multinode-721181-m02 has CIDR [10.244.1.0/24] 
	I0130 21:38:44.352686       1 main.go:223] Handling node with IPs: map[192.168.39.218:{}]
	I0130 21:38:44.352799       1 main.go:250] Node multinode-721181-m03 has CIDR [10.244.3.0/24] 
	I0130 21:38:54.357422       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0130 21:38:54.357485       1 main.go:227] handling current node
	I0130 21:38:54.357496       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I0130 21:38:54.357502       1 main.go:250] Node multinode-721181-m02 has CIDR [10.244.1.0/24] 
	I0130 21:38:54.357751       1 main.go:223] Handling node with IPs: map[192.168.39.218:{}]
	I0130 21:38:54.357788       1 main.go:250] Node multinode-721181-m03 has CIDR [10.244.3.0/24] 
	I0130 21:39:04.370062       1 main.go:223] Handling node with IPs: map[192.168.39.174:{}]
	I0130 21:39:04.370189       1 main.go:227] handling current node
	I0130 21:39:04.370282       1 main.go:223] Handling node with IPs: map[192.168.39.69:{}]
	I0130 21:39:04.370405       1 main.go:250] Node multinode-721181-m02 has CIDR [10.244.1.0/24] 
	I0130 21:39:04.370649       1 main.go:223] Handling node with IPs: map[192.168.39.218:{}]
	I0130 21:39:04.370711       1 main.go:250] Node multinode-721181-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [17938d324bae39ab1be9550e25fdcc9a6843032ff0a51a0eef3eaaead23196f6] <==
	I0130 21:35:28.584853       1 establishing_controller.go:76] Starting EstablishingController
	I0130 21:35:28.584864       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0130 21:35:28.584874       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0130 21:35:28.584887       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0130 21:35:28.630123       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0130 21:35:28.637101       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0130 21:35:28.638704       1 aggregator.go:166] initial CRD sync complete...
	I0130 21:35:28.638748       1 autoregister_controller.go:141] Starting autoregister controller
	I0130 21:35:28.638772       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0130 21:35:28.650829       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0130 21:35:28.676797       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0130 21:35:28.676839       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0130 21:35:28.676931       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0130 21:35:28.716343       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0130 21:35:28.722754       1 shared_informer.go:318] Caches are synced for configmaps
	I0130 21:35:28.722851       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0130 21:35:28.738829       1 cache.go:39] Caches are synced for autoregister controller
	I0130 21:35:29.539994       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0130 21:35:31.514065       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0130 21:35:31.665919       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0130 21:35:31.679449       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0130 21:35:31.740142       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0130 21:35:31.749907       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0130 21:35:41.491258       1 controller.go:624] quota admission added evaluator for: endpoints
	I0130 21:35:41.549703       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [dc298a9b9e362aff4646aa9e27bc0ec96d09b5ac407603d8cd2b3451ce20f517] <==
	I0130 21:37:26.278194       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-721181-m03"
	I0130 21:37:26.278407       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-721181-m02\" does not exist"
	I0130 21:37:26.279401       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-9gv46" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-9gv46"
	I0130 21:37:26.293957       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-721181-m02" podCIDRs=["10.244.1.0/24"]
	I0130 21:37:26.412988       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-721181-m02"
	I0130 21:37:27.179787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="74.147µs"
	I0130 21:37:40.454379       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="153.468µs"
	I0130 21:37:41.058288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="66.747µs"
	I0130 21:37:41.061675       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="113.404µs"
	I0130 21:38:01.364798       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-721181-m02"
	I0130 21:39:03.080760       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5b5d89c9d6-784g8"
	I0130 21:39:03.096704       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="34.12117ms"
	I0130 21:39:03.114449       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="17.404435ms"
	I0130 21:39:03.115058       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="281.806µs"
	I0130 21:39:04.311392       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="5.719822ms"
	I0130 21:39:04.311785       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="63.452µs"
	I0130 21:39:05.250979       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="59.772µs"
	I0130 21:39:06.085791       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-721181-m02"
	I0130 21:39:06.561368       1 event.go:307] "Event occurred" object="multinode-721181-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-721181-m03 event: Removing Node multinode-721181-m03 from Controller"
	I0130 21:39:06.736159       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-721181-m02"
	I0130 21:39:06.736235       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-721181-m03\" does not exist"
	I0130 21:39:06.736261       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-rgkc4" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-rgkc4"
	I0130 21:39:06.752308       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-721181-m03" podCIDRs=["10.244.2.0/24"]
	I0130 21:39:06.800351       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-721181-m03"
	I0130 21:39:07.632423       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="97.52µs"
	
	
	==> kube-proxy [c2b7dfd980813e40a6df79225505d83739fb11d56f3a3364814f69b8ad62b7d1] <==
	I0130 21:35:30.961848       1 server_others.go:69] "Using iptables proxy"
	I0130 21:35:30.994378       1 node.go:141] Successfully retrieved node IP: 192.168.39.174
	I0130 21:35:31.283904       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 21:35:31.284044       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 21:35:31.302727       1 server_others.go:152] "Using iptables Proxier"
	I0130 21:35:31.302813       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 21:35:31.302980       1 server.go:846] "Version info" version="v1.28.4"
	I0130 21:35:31.303175       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 21:35:31.304068       1 config.go:188] "Starting service config controller"
	I0130 21:35:31.304148       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 21:35:31.304820       1 config.go:97] "Starting endpoint slice config controller"
	I0130 21:35:31.304921       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 21:35:31.312084       1 config.go:315] "Starting node config controller"
	I0130 21:35:31.312119       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 21:35:31.406461       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 21:35:31.406479       1 shared_informer.go:318] Caches are synced for service config
	I0130 21:35:31.412503       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [e0b47c64886b0fa2bae9e5b6e67c3f8a4242f63c57da5914ec03816eb79c61c6] <==
	I0130 21:35:25.891946       1 serving.go:348] Generated self-signed cert in-memory
	W0130 21:35:28.620948       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 21:35:28.620997       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 21:35:28.621007       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 21:35:28.621014       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 21:35:28.665936       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0130 21:35:28.666092       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 21:35:28.682380       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0130 21:35:28.682495       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0130 21:35:28.685108       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0130 21:35:28.682905       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0130 21:35:28.785957       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 21:34:54 UTC, ends at Tue 2024-01-30 21:39:11 UTC. --
	Jan 30 21:35:33 multinode-721181 kubelet[917]: E0130 21:35:33.036099     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e4a21d6-a197-4a33-870a-840e3d20436f-kube-api-access-sp2vn podName:3e4a21d6-a197-4a33-870a-840e3d20436f nodeName:}" failed. No retries permitted until 2024-01-30 21:35:37.036086206 +0000 UTC m=+14.916634877 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-sp2vn" (UniqueName: "kubernetes.io/projected/3e4a21d6-a197-4a33-870a-840e3d20436f-kube-api-access-sp2vn") pod "busybox-5b5d89c9d6-zdhbw" (UID: "3e4a21d6-a197-4a33-870a-840e3d20436f") : object "default"/"kube-root-ca.crt" not registered
	Jan 30 21:35:33 multinode-721181 kubelet[917]: E0130 21:35:33.373236     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-2jstl" podUID="9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2"
	Jan 30 21:35:33 multinode-721181 kubelet[917]: E0130 21:35:33.373322     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-zdhbw" podUID="3e4a21d6-a197-4a33-870a-840e3d20436f"
	Jan 30 21:35:35 multinode-721181 kubelet[917]: E0130 21:35:35.373710     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-zdhbw" podUID="3e4a21d6-a197-4a33-870a-840e3d20436f"
	Jan 30 21:35:35 multinode-721181 kubelet[917]: E0130 21:35:35.373822     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-2jstl" podUID="9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2"
	Jan 30 21:35:36 multinode-721181 kubelet[917]: E0130 21:35:36.969048     917 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jan 30 21:35:36 multinode-721181 kubelet[917]: E0130 21:35:36.969149     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2-config-volume podName:9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2 nodeName:}" failed. No retries permitted until 2024-01-30 21:35:44.969125626 +0000 UTC m=+22.849674284 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2-config-volume") pod "coredns-5dd5756b68-2jstl" (UID: "9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2") : object "kube-system"/"coredns" not registered
	Jan 30 21:35:37 multinode-721181 kubelet[917]: E0130 21:35:37.070209     917 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jan 30 21:35:37 multinode-721181 kubelet[917]: E0130 21:35:37.070276     917 projected.go:198] Error preparing data for projected volume kube-api-access-sp2vn for pod default/busybox-5b5d89c9d6-zdhbw: object "default"/"kube-root-ca.crt" not registered
	Jan 30 21:35:37 multinode-721181 kubelet[917]: E0130 21:35:37.070338     917 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e4a21d6-a197-4a33-870a-840e3d20436f-kube-api-access-sp2vn podName:3e4a21d6-a197-4a33-870a-840e3d20436f nodeName:}" failed. No retries permitted until 2024-01-30 21:35:45.070323377 +0000 UTC m=+22.950872045 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-sp2vn" (UniqueName: "kubernetes.io/projected/3e4a21d6-a197-4a33-870a-840e3d20436f-kube-api-access-sp2vn") pod "busybox-5b5d89c9d6-zdhbw" (UID: "3e4a21d6-a197-4a33-870a-840e3d20436f") : object "default"/"kube-root-ca.crt" not registered
	Jan 30 21:35:37 multinode-721181 kubelet[917]: E0130 21:35:37.373081     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5b5d89c9d6-zdhbw" podUID="3e4a21d6-a197-4a33-870a-840e3d20436f"
	Jan 30 21:35:37 multinode-721181 kubelet[917]: E0130 21:35:37.373215     917 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-2jstl" podUID="9148a810-0d3a-4de7-a0a9-5a6ce49d8ba2"
	Jan 30 21:36:00 multinode-721181 kubelet[917]: I0130 21:36:00.547830     917 scope.go:117] "RemoveContainer" containerID="e3194c5077033b9eb54f0c7c20e70767aaa6cf9f7abd36f8fc71b7454b4eee41"
	Jan 30 21:36:22 multinode-721181 kubelet[917]: E0130 21:36:22.391751     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 21:36:22 multinode-721181 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 21:36:22 multinode-721181 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 21:36:22 multinode-721181 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 21:37:22 multinode-721181 kubelet[917]: E0130 21:37:22.394964     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 21:37:22 multinode-721181 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 21:37:22 multinode-721181 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 21:37:22 multinode-721181 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 21:38:22 multinode-721181 kubelet[917]: E0130 21:38:22.392846     917 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 21:38:22 multinode-721181 kubelet[917]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 21:38:22 multinode-721181 kubelet[917]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 21:38:22 multinode-721181 kubelet[917]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-721181 -n multinode-721181
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-721181 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (687.97s)

TestMultiNode/serial/StopMultiNode (142.18s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 stop
E0130 21:39:25.158899  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:39:32.716546  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
multinode_test.go:342: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-721181 stop: exit status 82 (2m0.283457958s)

-- stdout --
	* Stopping node "multinode-721181"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:344: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-721181 stop": exit status 82
multinode_test.go:348: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status
multinode_test.go:348: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-721181 status: exit status 3 (18.704440986s)

-- stdout --
	multinode-721181
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-721181-m02
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	E0130 21:41:33.161855  666404 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E0130 21:41:33.161905  666404 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

** /stderr **
multinode_test.go:351: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-721181 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-721181 -n multinode-721181
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-721181 -n multinode-721181: exit status 3 (3.196476193s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0130 21:41:36.521893  666511 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host
	E0130 21:41:36.521920  666511 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.174:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-721181" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (142.18s)

TestPreload (279.38s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-664476 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0130 21:51:52.587154  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-664476 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m18.262565318s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-664476 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-664476 image pull gcr.io/k8s-minikube/busybox: (1.084153681s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-664476
E0130 21:52:35.764193  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-664476: exit status 82 (2m0.275863082s)

-- stdout --
	* Stopping node "test-preload-664476"  ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-664476 failed: exit status 82
panic.go:523: *** TestPreload FAILED at 2024-01-30 21:54:07.376666107 +0000 UTC m=+3218.777608225
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-664476 -n test-preload-664476
E0130 21:54:25.158600  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-664476 -n test-preload-664476: exit status 3 (18.654549742s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0130 21:54:26.025821  669464 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.75:22: connect: no route to host
	E0130 21:54:26.025841  669464 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.75:22: connect: no route to host

** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-664476" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-664476" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-664476
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-664476: (1.105658391s)
--- FAIL: TestPreload (279.38s)

TestStartStop/group/old-k8s-version/serial/Stop (138.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-912992 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-912992 --alsologtostderr -v=3: exit status 82 (2m0.285683819s)

-- stdout --
	* Stopping node "old-k8s-version-912992"  ...
	
	

-- /stdout --
** stderr ** 
	I0130 22:05:51.387536  679605 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:05:51.387721  679605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:05:51.387732  679605 out.go:309] Setting ErrFile to fd 2...
	I0130 22:05:51.387739  679605 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:05:51.387952  679605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:05:51.388237  679605 out.go:303] Setting JSON to false
	I0130 22:05:51.388360  679605 mustload.go:65] Loading cluster: old-k8s-version-912992
	I0130 22:05:51.388788  679605 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:05:51.388888  679605 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/config.json ...
	I0130 22:05:51.389076  679605 mustload.go:65] Loading cluster: old-k8s-version-912992
	I0130 22:05:51.389229  679605 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:05:51.389271  679605 stop.go:39] StopHost: old-k8s-version-912992
	I0130 22:05:51.389791  679605 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:05:51.389852  679605 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:05:51.406144  679605 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39341
	I0130 22:05:51.406728  679605 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:05:51.407361  679605 main.go:141] libmachine: Using API Version  1
	I0130 22:05:51.407389  679605 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:05:51.407771  679605 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:05:51.410313  679605 out.go:177] * Stopping node "old-k8s-version-912992"  ...
	I0130 22:05:51.411550  679605 main.go:141] libmachine: Stopping "old-k8s-version-912992"...
	I0130 22:05:51.411566  679605 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:05:51.413614  679605 main.go:141] libmachine: (old-k8s-version-912992) Calling .Stop
	I0130 22:05:51.417137  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 0/120
	I0130 22:05:52.418394  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 1/120
	I0130 22:05:53.420726  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 2/120
	I0130 22:05:54.422117  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 3/120
	I0130 22:05:55.424584  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 4/120
	I0130 22:05:56.426866  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 5/120
	I0130 22:05:57.428355  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 6/120
	I0130 22:05:58.429554  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 7/120
	I0130 22:05:59.430983  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 8/120
	I0130 22:06:00.433131  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 9/120
	I0130 22:06:01.435319  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 10/120
	I0130 22:06:02.436992  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 11/120
	I0130 22:06:03.438671  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 12/120
	I0130 22:06:04.440388  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 13/120
	I0130 22:06:05.441719  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 14/120
	I0130 22:06:06.443815  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 15/120
	I0130 22:06:07.445258  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 16/120
	I0130 22:06:08.446726  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 17/120
	I0130 22:06:09.448187  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 18/120
	I0130 22:06:10.449615  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 19/120
	I0130 22:06:11.451533  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 20/120
	I0130 22:06:12.452861  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 21/120
	I0130 22:06:13.454024  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 22/120
	I0130 22:06:14.456253  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 23/120
	I0130 22:06:15.457649  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 24/120
	I0130 22:06:16.459252  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 25/120
	I0130 22:06:17.460732  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 26/120
	I0130 22:06:18.461902  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 27/120
	I0130 22:06:19.463872  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 28/120
	I0130 22:06:20.465595  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 29/120
	I0130 22:06:21.467739  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 30/120
	I0130 22:06:22.469526  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 31/120
	I0130 22:06:23.470989  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 32/120
	I0130 22:06:24.472192  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 33/120
	I0130 22:06:25.473766  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 34/120
	I0130 22:06:26.475476  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 35/120
	I0130 22:06:27.476783  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 36/120
	I0130 22:06:28.478738  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 37/120
	I0130 22:06:29.480130  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 38/120
	I0130 22:06:30.481393  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 39/120
	I0130 22:06:31.482985  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 40/120
	I0130 22:06:32.484291  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 41/120
	I0130 22:06:33.486122  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 42/120
	I0130 22:06:34.487917  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 43/120
	I0130 22:06:35.488989  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 44/120
	I0130 22:06:36.490669  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 45/120
	I0130 22:06:37.492253  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 46/120
	I0130 22:06:38.493587  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 47/120
	I0130 22:06:39.494966  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 48/120
	I0130 22:06:40.496354  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 49/120
	I0130 22:06:41.498536  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 50/120
	I0130 22:06:42.499747  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 51/120
	I0130 22:06:43.501059  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 52/120
	I0130 22:06:44.502295  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 53/120
	I0130 22:06:45.503641  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 54/120
	I0130 22:06:46.505542  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 55/120
	I0130 22:06:47.506836  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 56/120
	I0130 22:06:48.508294  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 57/120
	I0130 22:06:49.509524  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 58/120
	I0130 22:06:50.510697  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 59/120
	I0130 22:06:51.512739  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 60/120
	I0130 22:06:52.513978  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 61/120
	I0130 22:06:53.515819  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 62/120
	I0130 22:06:54.517174  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 63/120
	I0130 22:06:55.518468  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 64/120
	I0130 22:06:56.520205  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 65/120
	I0130 22:06:57.521483  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 66/120
	I0130 22:06:58.522706  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 67/120
	I0130 22:06:59.523893  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 68/120
	I0130 22:07:00.525352  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 69/120
	I0130 22:07:01.527337  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 70/120
	I0130 22:07:02.528577  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 71/120
	I0130 22:07:03.529971  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 72/120
	I0130 22:07:04.531249  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 73/120
	I0130 22:07:05.532678  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 74/120
	I0130 22:07:06.534571  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 75/120
	I0130 22:07:07.535811  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 76/120
	I0130 22:07:08.537003  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 77/120
	I0130 22:07:09.538234  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 78/120
	I0130 22:07:10.539556  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 79/120
	I0130 22:07:11.541438  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 80/120
	I0130 22:07:12.542846  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 81/120
	I0130 22:07:13.544049  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 82/120
	I0130 22:07:14.545405  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 83/120
	I0130 22:07:15.546636  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 84/120
	I0130 22:07:16.548475  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 85/120
	I0130 22:07:17.549771  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 86/120
	I0130 22:07:18.551005  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 87/120
	I0130 22:07:19.552379  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 88/120
	I0130 22:07:20.553547  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 89/120
	I0130 22:07:21.555491  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 90/120
	I0130 22:07:22.556719  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 91/120
	I0130 22:07:23.557964  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 92/120
	I0130 22:07:24.559171  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 93/120
	I0130 22:07:25.560592  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 94/120
	I0130 22:07:26.562752  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 95/120
	I0130 22:07:27.563979  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 96/120
	I0130 22:07:28.565449  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 97/120
	I0130 22:07:29.566806  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 98/120
	I0130 22:07:30.568278  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 99/120
	I0130 22:07:31.570748  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 100/120
	I0130 22:07:32.572055  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 101/120
	I0130 22:07:33.573388  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 102/120
	I0130 22:07:34.574674  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 103/120
	I0130 22:07:35.576161  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 104/120
	I0130 22:07:36.578305  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 105/120
	I0130 22:07:37.579506  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 106/120
	I0130 22:07:38.581199  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 107/120
	I0130 22:07:39.582724  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 108/120
	I0130 22:07:40.584051  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 109/120
	I0130 22:07:41.586192  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 110/120
	I0130 22:07:42.587434  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 111/120
	I0130 22:07:43.588803  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 112/120
	I0130 22:07:44.590163  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 113/120
	I0130 22:07:45.591692  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 114/120
	I0130 22:07:46.593662  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 115/120
	I0130 22:07:47.594897  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 116/120
	I0130 22:07:48.597010  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 117/120
	I0130 22:07:49.598378  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 118/120
	I0130 22:07:50.599908  679605 main.go:141] libmachine: (old-k8s-version-912992) Waiting for machine to stop 119/120
	I0130 22:07:51.600388  679605 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 22:07:51.600485  679605 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 22:07:51.602865  679605 out.go:177] 
	W0130 22:07:51.604375  679605 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 22:07:51.604394  679605 out.go:239] * 
	* 
	W0130 22:07:51.607773  679605 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 22:07:51.609323  679605 out.go:177] 

** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-912992 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992: exit status 3 (18.478911416s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0130 22:08:10.089822  680268 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0130 22:08:10.089845  680268 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-912992" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (138.77s)
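
The "Waiting for machine to stop N/120" lines above are a one-second poll repeated 120 times before the stop gives up with GUEST_STOP_TIMEOUT (exit status 82). The following is a minimal, illustrative Go sketch of that poll-until-timeout pattern, not minikube's actual implementation; waitForStop, stateFn, and the parameters are hypothetical names chosen to mirror the log.

// Illustrative sketch of the 1-second, 120-attempt stop poll seen above.
// Names and signatures are hypothetical, not minikube's real API.
package main

import (
	"errors"
	"fmt"
	"time"
)

type stateFn func() (string, error)

func waitForStop(state stateFn, attempts int, interval time.Duration) error {
	for i := 0; i < attempts; i++ {
		if s, err := state(); err == nil && s == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	// After the last attempt the caller gives up and exits with
	// GUEST_STOP_TIMEOUT (exit status 82 in the run above).
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	stillRunning := func() (string, error) { return "Running", nil }
	// Short interval so the sketch finishes quickly; the real run above
	// polls once per second for two minutes.
	if err := waitForStop(stillRunning, 3, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}
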

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (138.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-023824 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-023824 --alsologtostderr -v=3: exit status 82 (2m0.281972857s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-023824"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 22:06:13.271861  679823 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:06:13.271979  679823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:06:13.271990  679823 out.go:309] Setting ErrFile to fd 2...
	I0130 22:06:13.271996  679823 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:06:13.272212  679823 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:06:13.272521  679823 out.go:303] Setting JSON to false
	I0130 22:06:13.272613  679823 mustload.go:65] Loading cluster: no-preload-023824
	I0130 22:06:13.273057  679823 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:06:13.273161  679823 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/config.json ...
	I0130 22:06:13.273357  679823 mustload.go:65] Loading cluster: no-preload-023824
	I0130 22:06:13.273539  679823 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:06:13.273588  679823 stop.go:39] StopHost: no-preload-023824
	I0130 22:06:13.274049  679823 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:06:13.274115  679823 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:06:13.289636  679823 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44893
	I0130 22:06:13.290174  679823 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:06:13.290874  679823 main.go:141] libmachine: Using API Version  1
	I0130 22:06:13.290920  679823 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:06:13.291480  679823 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:06:13.294018  679823 out.go:177] * Stopping node "no-preload-023824"  ...
	I0130 22:06:13.295573  679823 main.go:141] libmachine: Stopping "no-preload-023824"...
	I0130 22:06:13.295596  679823 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:06:13.297499  679823 main.go:141] libmachine: (no-preload-023824) Calling .Stop
	I0130 22:06:13.302518  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 0/120
	I0130 22:06:14.303865  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 1/120
	I0130 22:06:15.305596  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 2/120
	I0130 22:06:16.307794  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 3/120
	I0130 22:06:17.309330  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 4/120
	I0130 22:06:18.311224  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 5/120
	I0130 22:06:19.312661  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 6/120
	I0130 22:06:20.314285  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 7/120
	I0130 22:06:21.316047  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 8/120
	I0130 22:06:22.317604  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 9/120
	I0130 22:06:23.319994  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 10/120
	I0130 22:06:24.321438  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 11/120
	I0130 22:06:25.322935  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 12/120
	I0130 22:06:26.324280  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 13/120
	I0130 22:06:27.325571  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 14/120
	I0130 22:06:28.327339  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 15/120
	I0130 22:06:29.328754  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 16/120
	I0130 22:06:30.330192  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 17/120
	I0130 22:06:31.331956  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 18/120
	I0130 22:06:32.333434  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 19/120
	I0130 22:06:33.334849  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 20/120
	I0130 22:06:34.336242  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 21/120
	I0130 22:06:35.337596  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 22/120
	I0130 22:06:36.338967  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 23/120
	I0130 22:06:37.340375  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 24/120
	I0130 22:06:38.342212  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 25/120
	I0130 22:06:39.343573  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 26/120
	I0130 22:06:40.344858  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 27/120
	I0130 22:06:41.346173  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 28/120
	I0130 22:06:42.348006  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 29/120
	I0130 22:06:43.350063  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 30/120
	I0130 22:06:44.351406  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 31/120
	I0130 22:06:45.352867  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 32/120
	I0130 22:06:46.354236  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 33/120
	I0130 22:06:47.355481  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 34/120
	I0130 22:06:48.357329  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 35/120
	I0130 22:06:49.358587  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 36/120
	I0130 22:06:50.359890  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 37/120
	I0130 22:06:51.361182  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 38/120
	I0130 22:06:52.362553  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 39/120
	I0130 22:06:53.364550  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 40/120
	I0130 22:06:54.365869  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 41/120
	I0130 22:06:55.367198  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 42/120
	I0130 22:06:56.368559  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 43/120
	I0130 22:06:57.369976  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 44/120
	I0130 22:06:58.371855  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 45/120
	I0130 22:06:59.373100  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 46/120
	I0130 22:07:00.374409  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 47/120
	I0130 22:07:01.375782  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 48/120
	I0130 22:07:02.377263  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 49/120
	I0130 22:07:03.378552  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 50/120
	I0130 22:07:04.379886  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 51/120
	I0130 22:07:05.381153  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 52/120
	I0130 22:07:06.382442  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 53/120
	I0130 22:07:07.383666  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 54/120
	I0130 22:07:08.385659  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 55/120
	I0130 22:07:09.387801  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 56/120
	I0130 22:07:10.389159  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 57/120
	I0130 22:07:11.390479  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 58/120
	I0130 22:07:12.391913  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 59/120
	I0130 22:07:13.393943  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 60/120
	I0130 22:07:14.395418  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 61/120
	I0130 22:07:15.396854  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 62/120
	I0130 22:07:16.398046  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 63/120
	I0130 22:07:17.399470  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 64/120
	I0130 22:07:18.401439  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 65/120
	I0130 22:07:19.402589  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 66/120
	I0130 22:07:20.404045  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 67/120
	I0130 22:07:21.405223  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 68/120
	I0130 22:07:22.406674  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 69/120
	I0130 22:07:23.408849  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 70/120
	I0130 22:07:24.410163  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 71/120
	I0130 22:07:25.411573  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 72/120
	I0130 22:07:26.412872  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 73/120
	I0130 22:07:27.414313  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 74/120
	I0130 22:07:28.416438  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 75/120
	I0130 22:07:29.417754  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 76/120
	I0130 22:07:30.419188  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 77/120
	I0130 22:07:31.420436  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 78/120
	I0130 22:07:32.421966  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 79/120
	I0130 22:07:33.423487  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 80/120
	I0130 22:07:34.424785  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 81/120
	I0130 22:07:35.426035  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 82/120
	I0130 22:07:36.427364  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 83/120
	I0130 22:07:37.428691  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 84/120
	I0130 22:07:38.430672  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 85/120
	I0130 22:07:39.431969  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 86/120
	I0130 22:07:40.433216  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 87/120
	I0130 22:07:41.434473  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 88/120
	I0130 22:07:42.435955  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 89/120
	I0130 22:07:43.437794  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 90/120
	I0130 22:07:44.439872  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 91/120
	I0130 22:07:45.441425  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 92/120
	I0130 22:07:46.442717  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 93/120
	I0130 22:07:47.443938  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 94/120
	I0130 22:07:48.445825  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 95/120
	I0130 22:07:49.447166  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 96/120
	I0130 22:07:50.448643  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 97/120
	I0130 22:07:51.449832  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 98/120
	I0130 22:07:52.451115  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 99/120
	I0130 22:07:53.452556  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 100/120
	I0130 22:07:54.453950  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 101/120
	I0130 22:07:55.455348  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 102/120
	I0130 22:07:56.456689  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 103/120
	I0130 22:07:57.458199  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 104/120
	I0130 22:07:58.460120  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 105/120
	I0130 22:07:59.461561  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 106/120
	I0130 22:08:00.462947  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 107/120
	I0130 22:08:01.464393  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 108/120
	I0130 22:08:02.465822  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 109/120
	I0130 22:08:03.467766  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 110/120
	I0130 22:08:04.468993  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 111/120
	I0130 22:08:05.470427  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 112/120
	I0130 22:08:06.471828  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 113/120
	I0130 22:08:07.473148  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 114/120
	I0130 22:08:08.475211  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 115/120
	I0130 22:08:09.476625  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 116/120
	I0130 22:08:10.477882  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 117/120
	I0130 22:08:11.479269  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 118/120
	I0130 22:08:12.480684  679823 main.go:141] libmachine: (no-preload-023824) Waiting for machine to stop 119/120
	I0130 22:08:13.481610  679823 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 22:08:13.481678  679823 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 22:08:13.483717  679823 out.go:177] 
	W0130 22:08:13.484922  679823 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 22:08:13.484936  679823 out.go:239] * 
	* 
	W0130 22:08:13.488315  679823 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 22:08:13.489634  679823 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-023824 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824: exit status 3 (18.614544599s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:32.105853  680404 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host
	E0130 22:08:32.105880  680404 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-023824" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (138.90s)
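
The post-mortem status check above fails because the node's SSH port is unreachable ("no route to host"), so the host state is reported as Error. Below is a hedged sketch of that kind of probe under the assumption that status is derived from a TCP dial to port 22; probeSSH is a hypothetical helper, not minikube's API, and the address is the one reported for no-preload-023824 above.

// Sketch of a status probe that maps an unreachable SSH port to "Error".
// Hypothetical helper; only the address comes from the log above.
package main

import (
	"fmt"
	"net"
	"time"
)

func probeSSH(ip string) string {
	conn, err := net.DialTimeout("tcp", net.JoinHostPort(ip, "22"), 5*time.Second)
	if err != nil {
		// e.g. "dial tcp 192.168.61.232:22: connect: no route to host"
		fmt.Println("status error:", err)
		return "Error"
	}
	conn.Close()
	return "Running"
}

func main() {
	fmt.Println(probeSSH("192.168.61.232"))
}
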

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (138.87s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-713938 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-713938 --alsologtostderr -v=3: exit status 82 (2m0.262617014s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-713938"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 22:06:14.321353  679863 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:06:14.321497  679863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:06:14.321508  679863 out.go:309] Setting ErrFile to fd 2...
	I0130 22:06:14.321515  679863 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:06:14.321691  679863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:06:14.321971  679863 out.go:303] Setting JSON to false
	I0130 22:06:14.322056  679863 mustload.go:65] Loading cluster: embed-certs-713938
	I0130 22:06:14.322402  679863 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:06:14.322469  679863 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/config.json ...
	I0130 22:06:14.322629  679863 mustload.go:65] Loading cluster: embed-certs-713938
	I0130 22:06:14.322728  679863 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:06:14.322752  679863 stop.go:39] StopHost: embed-certs-713938
	I0130 22:06:14.323192  679863 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:06:14.323245  679863 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:06:14.339453  679863 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35629
	I0130 22:06:14.339980  679863 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:06:14.340741  679863 main.go:141] libmachine: Using API Version  1
	I0130 22:06:14.340770  679863 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:06:14.341261  679863 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:06:14.344043  679863 out.go:177] * Stopping node "embed-certs-713938"  ...
	I0130 22:06:14.345535  679863 main.go:141] libmachine: Stopping "embed-certs-713938"...
	I0130 22:06:14.345568  679863 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:06:14.347224  679863 main.go:141] libmachine: (embed-certs-713938) Calling .Stop
	I0130 22:06:14.351025  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 0/120
	I0130 22:06:15.352601  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 1/120
	I0130 22:06:16.353837  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 2/120
	I0130 22:06:17.355069  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 3/120
	I0130 22:06:18.356188  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 4/120
	I0130 22:06:19.358363  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 5/120
	I0130 22:06:20.360250  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 6/120
	I0130 22:06:21.361286  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 7/120
	I0130 22:06:22.362478  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 8/120
	I0130 22:06:23.363467  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 9/120
	I0130 22:06:24.365136  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 10/120
	I0130 22:06:25.366654  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 11/120
	I0130 22:06:26.367953  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 12/120
	I0130 22:06:27.369145  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 13/120
	I0130 22:06:28.370274  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 14/120
	I0130 22:06:29.372762  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 15/120
	I0130 22:06:30.374049  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 16/120
	I0130 22:06:31.375813  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 17/120
	I0130 22:06:32.377029  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 18/120
	I0130 22:06:33.378407  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 19/120
	I0130 22:06:34.380548  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 20/120
	I0130 22:06:35.381810  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 21/120
	I0130 22:06:36.383918  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 22/120
	I0130 22:06:37.385306  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 23/120
	I0130 22:06:38.386442  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 24/120
	I0130 22:06:39.388103  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 25/120
	I0130 22:06:40.389316  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 26/120
	I0130 22:06:41.390365  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 27/120
	I0130 22:06:42.391434  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 28/120
	I0130 22:06:43.392494  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 29/120
	I0130 22:06:44.394466  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 30/120
	I0130 22:06:45.395551  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 31/120
	I0130 22:06:46.396789  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 32/120
	I0130 22:06:47.397945  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 33/120
	I0130 22:06:48.399068  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 34/120
	I0130 22:06:49.400822  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 35/120
	I0130 22:06:50.402039  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 36/120
	I0130 22:06:51.403740  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 37/120
	I0130 22:06:52.404823  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 38/120
	I0130 22:06:53.406016  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 39/120
	I0130 22:06:54.407974  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 40/120
	I0130 22:06:55.409045  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 41/120
	I0130 22:06:56.410198  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 42/120
	I0130 22:06:57.411281  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 43/120
	I0130 22:06:58.412467  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 44/120
	I0130 22:06:59.414276  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 45/120
	I0130 22:07:00.415431  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 46/120
	I0130 22:07:01.416577  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 47/120
	I0130 22:07:02.417800  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 48/120
	I0130 22:07:03.418876  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 49/120
	I0130 22:07:04.420738  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 50/120
	I0130 22:07:05.421824  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 51/120
	I0130 22:07:06.423786  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 52/120
	I0130 22:07:07.425090  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 53/120
	I0130 22:07:08.426409  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 54/120
	I0130 22:07:09.428162  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 55/120
	I0130 22:07:10.429415  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 56/120
	I0130 22:07:11.430517  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 57/120
	I0130 22:07:12.431766  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 58/120
	I0130 22:07:13.432937  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 59/120
	I0130 22:07:14.434948  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 60/120
	I0130 22:07:15.436210  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 61/120
	I0130 22:07:16.437359  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 62/120
	I0130 22:07:17.438593  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 63/120
	I0130 22:07:18.439693  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 64/120
	I0130 22:07:19.441568  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 65/120
	I0130 22:07:20.443001  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 66/120
	I0130 22:07:21.444148  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 67/120
	I0130 22:07:22.445588  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 68/120
	I0130 22:07:23.446811  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 69/120
	I0130 22:07:24.448434  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 70/120
	I0130 22:07:25.449507  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 71/120
	I0130 22:07:26.450668  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 72/120
	I0130 22:07:27.451756  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 73/120
	I0130 22:07:28.453037  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 74/120
	I0130 22:07:29.454717  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 75/120
	I0130 22:07:30.455818  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 76/120
	I0130 22:07:31.456900  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 77/120
	I0130 22:07:32.458095  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 78/120
	I0130 22:07:33.459132  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 79/120
	I0130 22:07:34.461036  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 80/120
	I0130 22:07:35.462324  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 81/120
	I0130 22:07:36.463487  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 82/120
	I0130 22:07:37.464827  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 83/120
	I0130 22:07:38.465961  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 84/120
	I0130 22:07:39.467532  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 85/120
	I0130 22:07:40.468568  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 86/120
	I0130 22:07:41.469848  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 87/120
	I0130 22:07:42.471002  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 88/120
	I0130 22:07:43.472521  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 89/120
	I0130 22:07:44.474437  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 90/120
	I0130 22:07:45.475702  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 91/120
	I0130 22:07:46.476738  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 92/120
	I0130 22:07:47.478070  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 93/120
	I0130 22:07:48.479251  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 94/120
	I0130 22:07:49.480997  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 95/120
	I0130 22:07:50.482170  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 96/120
	I0130 22:07:51.483314  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 97/120
	I0130 22:07:52.484538  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 98/120
	I0130 22:07:53.485946  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 99/120
	I0130 22:07:54.487843  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 100/120
	I0130 22:07:55.488942  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 101/120
	I0130 22:07:56.490084  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 102/120
	I0130 22:07:57.491186  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 103/120
	I0130 22:07:58.492548  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 104/120
	I0130 22:07:59.494205  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 105/120
	I0130 22:08:00.495255  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 106/120
	I0130 22:08:01.496448  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 107/120
	I0130 22:08:02.497662  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 108/120
	I0130 22:08:03.498722  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 109/120
	I0130 22:08:04.500496  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 110/120
	I0130 22:08:05.501778  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 111/120
	I0130 22:08:06.502912  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 112/120
	I0130 22:08:07.504072  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 113/120
	I0130 22:08:08.505164  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 114/120
	I0130 22:08:09.506658  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 115/120
	I0130 22:08:10.507991  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 116/120
	I0130 22:08:11.509135  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 117/120
	I0130 22:08:12.510230  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 118/120
	I0130 22:08:13.512418  679863 main.go:141] libmachine: (embed-certs-713938) Waiting for machine to stop 119/120
	I0130 22:08:14.513651  679863 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 22:08:14.513728  679863 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 22:08:14.515766  679863 out.go:177] 
	W0130 22:08:14.517162  679863 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 22:08:14.517186  679863 out.go:239] * 
	* 
	W0130 22:08:14.520591  679863 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 22:08:14.522122  679863 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-713938 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938: exit status 3 (18.605505646s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:33.129880  680434 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host
	E0130 22:08:33.129902  680434 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-713938" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (138.87s)
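
For completeness, the "exit status 82" the harness reports above is recoverable from a failed command invocation via the standard library. A minimal sketch, assuming only that the same command line is re-run; everything apart from the command arguments (copied from the test output above) is illustrative.

// Sketch: recovering the exit status (82 here) of a failed stop invocation.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "embed-certs-713938", "--alsologtostderr", "-v=3")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Exit status 82 corresponds to GUEST_STOP_TIMEOUT in the log above.
		fmt.Printf("exit status %d\n%s", exitErr.ExitCode(), out)
	}
}
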

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-850803 --alsologtostderr -v=3
E0130 22:06:52.587479  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-850803 --alsologtostderr -v=3: exit status 82 (2m0.272460972s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-850803"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 22:06:37.530204  680025 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:06:37.530341  680025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:06:37.530350  680025 out.go:309] Setting ErrFile to fd 2...
	I0130 22:06:37.530355  680025 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:06:37.530591  680025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:06:37.530909  680025 out.go:303] Setting JSON to false
	I0130 22:06:37.530990  680025 mustload.go:65] Loading cluster: default-k8s-diff-port-850803
	I0130 22:06:37.531317  680025 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:06:37.531391  680025 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:06:37.531545  680025 mustload.go:65] Loading cluster: default-k8s-diff-port-850803
	I0130 22:06:37.531653  680025 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:06:37.531677  680025 stop.go:39] StopHost: default-k8s-diff-port-850803
	I0130 22:06:37.532177  680025 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:06:37.532224  680025 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:06:37.546319  680025 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43899
	I0130 22:06:37.546762  680025 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:06:37.547304  680025 main.go:141] libmachine: Using API Version  1
	I0130 22:06:37.547326  680025 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:06:37.547668  680025 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:06:37.550233  680025 out.go:177] * Stopping node "default-k8s-diff-port-850803"  ...
	I0130 22:06:37.551619  680025 main.go:141] libmachine: Stopping "default-k8s-diff-port-850803"...
	I0130 22:06:37.551635  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:06:37.553098  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Stop
	I0130 22:06:37.556176  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 0/120
	I0130 22:06:38.557608  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 1/120
	I0130 22:06:39.558723  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 2/120
	I0130 22:06:40.560103  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 3/120
	I0130 22:06:41.561258  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 4/120
	I0130 22:06:42.563033  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 5/120
	I0130 22:06:43.564190  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 6/120
	I0130 22:06:44.565459  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 7/120
	I0130 22:06:45.566896  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 8/120
	I0130 22:06:46.568264  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 9/120
	I0130 22:06:47.570318  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 10/120
	I0130 22:06:48.571595  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 11/120
	I0130 22:06:49.572801  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 12/120
	I0130 22:06:50.573993  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 13/120
	I0130 22:06:51.575326  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 14/120
	I0130 22:06:52.577065  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 15/120
	I0130 22:06:53.578509  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 16/120
	I0130 22:06:54.579753  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 17/120
	I0130 22:06:55.581038  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 18/120
	I0130 22:06:56.582216  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 19/120
	I0130 22:06:57.584342  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 20/120
	I0130 22:06:58.585793  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 21/120
	I0130 22:06:59.587028  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 22/120
	I0130 22:07:00.588405  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 23/120
	I0130 22:07:01.589777  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 24/120
	I0130 22:07:02.591439  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 25/120
	I0130 22:07:03.592701  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 26/120
	I0130 22:07:04.593842  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 27/120
	I0130 22:07:05.595180  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 28/120
	I0130 22:07:06.596402  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 29/120
	I0130 22:07:07.597980  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 30/120
	I0130 22:07:08.599139  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 31/120
	I0130 22:07:09.600277  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 32/120
	I0130 22:07:10.601551  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 33/120
	I0130 22:07:11.602717  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 34/120
	I0130 22:07:12.604337  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 35/120
	I0130 22:07:13.605676  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 36/120
	I0130 22:07:14.606841  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 37/120
	I0130 22:07:15.608285  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 38/120
	I0130 22:07:16.609557  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 39/120
	I0130 22:07:17.610842  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 40/120
	I0130 22:07:18.612190  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 41/120
	I0130 22:07:19.613350  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 42/120
	I0130 22:07:20.614749  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 43/120
	I0130 22:07:21.615927  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 44/120
	I0130 22:07:22.617713  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 45/120
	I0130 22:07:23.619000  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 46/120
	I0130 22:07:24.620143  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 47/120
	I0130 22:07:25.621666  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 48/120
	I0130 22:07:26.623713  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 49/120
	I0130 22:07:27.625638  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 50/120
	I0130 22:07:28.627195  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 51/120
	I0130 22:07:29.628643  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 52/120
	I0130 22:07:30.630150  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 53/120
	I0130 22:07:31.631395  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 54/120
	I0130 22:07:32.633157  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 55/120
	I0130 22:07:33.634655  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 56/120
	I0130 22:07:34.635878  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 57/120
	I0130 22:07:35.637579  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 58/120
	I0130 22:07:36.638786  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 59/120
	I0130 22:07:37.640894  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 60/120
	I0130 22:07:38.642169  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 61/120
	I0130 22:07:39.643685  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 62/120
	I0130 22:07:40.644962  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 63/120
	I0130 22:07:41.646317  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 64/120
	I0130 22:07:42.648144  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 65/120
	I0130 22:07:43.649494  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 66/120
	I0130 22:07:44.650830  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 67/120
	I0130 22:07:45.652308  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 68/120
	I0130 22:07:46.653554  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 69/120
	I0130 22:07:47.655425  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 70/120
	I0130 22:07:48.656778  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 71/120
	I0130 22:07:49.658071  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 72/120
	I0130 22:07:50.659374  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 73/120
	I0130 22:07:51.660752  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 74/120
	I0130 22:07:52.662636  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 75/120
	I0130 22:07:53.664196  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 76/120
	I0130 22:07:54.665491  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 77/120
	I0130 22:07:55.667038  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 78/120
	I0130 22:07:56.668348  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 79/120
	I0130 22:07:57.669604  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 80/120
	I0130 22:07:58.671204  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 81/120
	I0130 22:07:59.672508  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 82/120
	I0130 22:08:00.674145  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 83/120
	I0130 22:08:01.675557  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 84/120
	I0130 22:08:02.677848  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 85/120
	I0130 22:08:03.679038  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 86/120
	I0130 22:08:04.680602  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 87/120
	I0130 22:08:05.682251  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 88/120
	I0130 22:08:06.684117  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 89/120
	I0130 22:08:07.686423  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 90/120
	I0130 22:08:08.687780  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 91/120
	I0130 22:08:09.689262  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 92/120
	I0130 22:08:10.690648  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 93/120
	I0130 22:08:11.692023  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 94/120
	I0130 22:08:12.694045  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 95/120
	I0130 22:08:13.695463  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 96/120
	I0130 22:08:14.696637  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 97/120
	I0130 22:08:15.698192  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 98/120
	I0130 22:08:16.699727  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 99/120
	I0130 22:08:17.702121  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 100/120
	I0130 22:08:18.704114  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 101/120
	I0130 22:08:19.705422  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 102/120
	I0130 22:08:20.706883  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 103/120
	I0130 22:08:21.708448  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 104/120
	I0130 22:08:22.710548  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 105/120
	I0130 22:08:23.711882  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 106/120
	I0130 22:08:24.713308  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 107/120
	I0130 22:08:25.714914  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 108/120
	I0130 22:08:26.716363  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 109/120
	I0130 22:08:27.718412  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 110/120
	I0130 22:08:28.719555  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 111/120
	I0130 22:08:29.720961  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 112/120
	I0130 22:08:30.722416  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 113/120
	I0130 22:08:31.723710  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 114/120
	I0130 22:08:32.725780  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 115/120
	I0130 22:08:33.727185  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 116/120
	I0130 22:08:34.728516  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 117/120
	I0130 22:08:35.730028  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 118/120
	I0130 22:08:36.731971  680025 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for machine to stop 119/120
	I0130 22:08:37.733039  680025 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0130 22:08:37.733119  680025 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0130 22:08:37.735061  680025 out.go:177] 
	W0130 22:08:37.736482  680025 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0130 22:08:37.736507  680025 out.go:239] * 
	* 
	W0130 22:08:37.740032  680025 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0130 22:08:37.741356  680025 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-850803 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803: exit status 3 (18.681990521s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:56.425781  680674 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host
	E0130 22:08:56.425801  680674 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850803" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.96s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992: exit status 3 (3.171577607s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:13.261769  680345 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0130 22:08:13.261786  680345 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-912992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-912992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.14947763s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-912992 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992: exit status 3 (3.062542856s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:22.473848  680475 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host
	E0130 22:08:22.473887  680475 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.84:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-912992" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824: exit status 3 (3.16755867s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:35.273803  680569 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host
	E0130 22:08:35.273821  680569 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-023824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-023824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154551539s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-023824 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824: exit status 3 (3.06132606s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:44.489859  680715 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host
	E0130 22:08:44.489891  680715 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.232:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-023824" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938: exit status 3 (3.199572046s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:36.329769  680598 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host
	E0130 22:08:36.329792  680598 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-713938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-713938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.154714458s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-713938 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938: exit status 3 (3.061202254s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:45.545911  680745 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host
	E0130 22:08:45.545939  680745 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-713938" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803: exit status 3 (3.199836307s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:08:59.625776  680896 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host
	E0130 22:08:59.625794  680896 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-850803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-850803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153517546s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-850803 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803: exit status 3 (3.062133472s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0130 22:09:08.841891  680966 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host
	E0130 22:09:08.841917  680966 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.254:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-850803" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 22:16:52.587661  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-912992 -n old-k8s-version-912992
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:24:10.880468602 +0000 UTC m=+5022.281410719
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-912992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-912992 logs -n 25: (1.656758982s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-742001                              | stopped-upgrade-742001       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-822826                              | cert-expiration-822826       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:09:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:09:08.900187  681007 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:09:08.900447  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900456  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:09:08.900460  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900635  681007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:09:08.901158  681007 out.go:303] Setting JSON to false
	I0130 22:09:08.902121  681007 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10301,"bootTime":1706642248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:09:08.902185  681007 start.go:138] virtualization: kvm guest
	I0130 22:09:08.904443  681007 out.go:177] * [default-k8s-diff-port-850803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:09:08.905904  681007 notify.go:220] Checking for updates...
	I0130 22:09:08.905916  681007 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:09:08.907548  681007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:09:08.908959  681007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:09:08.910401  681007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:09:08.911766  681007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:09:08.913044  681007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:09:08.914682  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:09:08.915157  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.915201  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.929650  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0130 22:09:08.930098  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.930701  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.930721  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.931048  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.931239  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.931458  681007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:09:08.931745  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.931778  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.946395  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0130 22:09:08.946754  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.947305  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.947328  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.947686  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.947865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.982088  681007 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 22:09:08.983300  681007 start.go:298] selected driver: kvm2
	I0130 22:09:08.983312  681007 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.983408  681007 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:09:08.984088  681007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:08.984161  681007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:09:08.997808  681007 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:09:08.998205  681007 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 22:09:08.998285  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:09:08.998305  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:09:08.998323  681007 start_flags.go:321] config:
	{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.998554  681007 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:09.000506  681007 out.go:177] * Starting control plane node default-k8s-diff-port-850803 in cluster default-k8s-diff-port-850803
	I0130 22:09:09.417791  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:09.001801  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:09:09.001832  681007 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 22:09:09.001844  681007 cache.go:56] Caching tarball of preloaded images
	I0130 22:09:09.001930  681007 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:09:09.001942  681007 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 22:09:09.002074  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:09:09.002279  681007 start.go:365] acquiring machines lock for default-k8s-diff-port-850803: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:09:15.497723  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:18.569709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:24.649709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:27.721682  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:33.801746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:36.873758  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:42.953715  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:46.025774  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:52.105752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:55.177803  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:01.257740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:04.329775  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:10.409748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:13.481709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:19.561742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:22.634236  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:28.713807  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:31.785746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:37.865734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:40.937754  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:47.017740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:50.089744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:56.169767  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:59.241735  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:05.321760  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:08.393763  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:14.473745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:17.545673  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:23.625780  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:26.697711  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:32.777688  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:35.849700  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:41.929752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:45.001744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:51.081733  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:54.153686  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:00.233749  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:03.305724  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:09.385748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:12.457710  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:18.537805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:21.609734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:27.689765  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:30.761718  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:36.841762  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:39.913805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:45.993742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:49.065753  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:55.145745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:58.217703  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.302231  680786 start.go:369] acquired machines lock for "no-preload-023824" in 4m22.656152529s
	I0130 22:13:07.302304  680786 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:07.302314  680786 fix.go:54] fixHost starting: 
	I0130 22:13:07.302790  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:07.302835  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:07.317987  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0130 22:13:07.318451  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:07.318943  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:13:07.318965  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:07.319340  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:07.319538  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:07.319679  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:13:07.321151  680786 fix.go:102] recreateIfNeeded on no-preload-023824: state=Stopped err=<nil>
	I0130 22:13:07.321173  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	W0130 22:13:07.321343  680786 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:07.322929  680786 out.go:177] * Restarting existing kvm2 VM for "no-preload-023824" ...
	I0130 22:13:04.297739  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.299984  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:07.300024  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:13:07.302029  680506 machine.go:91] provisioned docker machine in 4m44.646018806s
	I0130 22:13:07.302108  680506 fix.go:56] fixHost completed within 4m44.666279152s
	I0130 22:13:07.302116  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 4m44.666320503s
	W0130 22:13:07.302153  680506 start.go:694] error starting host: provision: host is not running
	W0130 22:13:07.302282  680506 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 22:13:07.302293  680506 start.go:709] Will try again in 5 seconds ...
	I0130 22:13:07.324101  680786 main.go:141] libmachine: (no-preload-023824) Calling .Start
	I0130 22:13:07.324252  680786 main.go:141] libmachine: (no-preload-023824) Ensuring networks are active...
	I0130 22:13:07.325034  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network default is active
	I0130 22:13:07.325415  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network mk-no-preload-023824 is active
	I0130 22:13:07.325804  680786 main.go:141] libmachine: (no-preload-023824) Getting domain xml...
	I0130 22:13:07.326696  680786 main.go:141] libmachine: (no-preload-023824) Creating domain...
	I0130 22:13:08.499216  680786 main.go:141] libmachine: (no-preload-023824) Waiting to get IP...
	I0130 22:13:08.500483  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.500933  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.501067  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.500931  681630 retry.go:31] will retry after 268.447444ms: waiting for machine to come up
	I0130 22:13:08.771705  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.772073  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.772101  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.772010  681630 retry.go:31] will retry after 235.233391ms: waiting for machine to come up
	I0130 22:13:09.008402  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.008795  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.008826  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.008757  681630 retry.go:31] will retry after 433.981592ms: waiting for machine to come up
	I0130 22:13:09.444576  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.444963  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.445001  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.444900  681630 retry.go:31] will retry after 518.108537ms: waiting for machine to come up
	I0130 22:13:12.306584  680506 start.go:365] acquiring machines lock for old-k8s-version-912992: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:13:09.964605  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.964956  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.964985  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.964919  681630 retry.go:31] will retry after 497.667085ms: waiting for machine to come up
	I0130 22:13:10.464522  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:10.464897  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:10.464930  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:10.464853  681630 retry.go:31] will retry after 918.136538ms: waiting for machine to come up
	I0130 22:13:11.384191  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:11.384665  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:11.384719  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:11.384630  681630 retry.go:31] will retry after 942.595537ms: waiting for machine to come up
	I0130 22:13:12.328976  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:12.329412  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:12.329438  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:12.329365  681630 retry.go:31] will retry after 1.080632129s: waiting for machine to come up
	I0130 22:13:13.411494  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:13.411880  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:13.411905  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:13.411830  681630 retry.go:31] will retry after 1.70851135s: waiting for machine to come up
	I0130 22:13:15.122731  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:15.123212  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:15.123244  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:15.123164  681630 retry.go:31] will retry after 1.890143577s: waiting for machine to come up
	I0130 22:13:17.016347  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:17.016789  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:17.016812  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:17.016745  681630 retry.go:31] will retry after 2.710901352s: waiting for machine to come up
	I0130 22:13:19.731235  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:19.731687  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:19.731717  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:19.731628  681630 retry.go:31] will retry after 3.494667363s: waiting for machine to come up
	I0130 22:13:23.227477  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:23.227894  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:23.227927  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:23.227844  681630 retry.go:31] will retry after 4.45900259s: waiting for machine to come up
	I0130 22:13:28.902379  680821 start.go:369] acquired machines lock for "embed-certs-713938" in 4m43.197815022s
	I0130 22:13:28.902454  680821 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:28.902466  680821 fix.go:54] fixHost starting: 
	I0130 22:13:28.902824  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:28.902863  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:28.922121  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0130 22:13:28.922554  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:28.923019  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:13:28.923040  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:28.923378  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:28.923587  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:28.923730  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:13:28.925000  680821 fix.go:102] recreateIfNeeded on embed-certs-713938: state=Stopped err=<nil>
	I0130 22:13:28.925042  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	W0130 22:13:28.925225  680821 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:28.927620  680821 out.go:177] * Restarting existing kvm2 VM for "embed-certs-713938" ...
	I0130 22:13:27.688611  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689047  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has current primary IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689078  680786 main.go:141] libmachine: (no-preload-023824) Found IP for machine: 192.168.61.232
	I0130 22:13:27.689095  680786 main.go:141] libmachine: (no-preload-023824) Reserving static IP address...
	I0130 22:13:27.689540  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.689585  680786 main.go:141] libmachine: (no-preload-023824) DBG | skip adding static IP to network mk-no-preload-023824 - found existing host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"}
	I0130 22:13:27.689610  680786 main.go:141] libmachine: (no-preload-023824) Reserved static IP address: 192.168.61.232
	I0130 22:13:27.689630  680786 main.go:141] libmachine: (no-preload-023824) Waiting for SSH to be available...
	I0130 22:13:27.689645  680786 main.go:141] libmachine: (no-preload-023824) DBG | Getting to WaitForSSH function...
	I0130 22:13:27.691725  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692037  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.692060  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692196  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH client type: external
	I0130 22:13:27.692236  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa (-rw-------)
	I0130 22:13:27.692288  680786 main.go:141] libmachine: (no-preload-023824) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:27.692305  680786 main.go:141] libmachine: (no-preload-023824) DBG | About to run SSH command:
	I0130 22:13:27.692318  680786 main.go:141] libmachine: (no-preload-023824) DBG | exit 0
	I0130 22:13:27.784900  680786 main.go:141] libmachine: (no-preload-023824) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:27.785232  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetConfigRaw
	I0130 22:13:27.786142  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:27.788581  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.788961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.788997  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.789280  680786 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/config.json ...
	I0130 22:13:27.789457  680786 machine.go:88] provisioning docker machine ...
	I0130 22:13:27.789489  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:27.789691  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.789857  680786 buildroot.go:166] provisioning hostname "no-preload-023824"
	I0130 22:13:27.789879  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.790013  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.792055  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792370  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.792405  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792478  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.792643  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.792790  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.793010  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.793205  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.793814  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.793842  680786 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-023824 && echo "no-preload-023824" | sudo tee /etc/hostname
	I0130 22:13:27.931141  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-023824
	
	I0130 22:13:27.931176  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.933882  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934242  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.934277  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934403  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.934588  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934748  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934917  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.935106  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.935413  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.935438  680786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-023824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-023824/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-023824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:28.067312  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:28.067345  680786 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:28.067368  680786 buildroot.go:174] setting up certificates
	I0130 22:13:28.067380  680786 provision.go:83] configureAuth start
	I0130 22:13:28.067389  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:28.067687  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.070381  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070751  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.070787  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070891  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.073317  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073672  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.073704  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073925  680786 provision.go:138] copyHostCerts
	I0130 22:13:28.074050  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:28.074092  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:28.074186  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:28.074311  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:28.074330  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:28.074381  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:28.074474  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:28.074485  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:28.074527  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:28.074604  680786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.no-preload-023824 san=[192.168.61.232 192.168.61.232 localhost 127.0.0.1 minikube no-preload-023824]
	I0130 22:13:28.175428  680786 provision.go:172] copyRemoteCerts
	I0130 22:13:28.175531  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:28.175566  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.178015  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178376  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.178416  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178540  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.178705  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.178860  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.179029  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.265687  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:28.287768  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:28.309363  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:28.331204  680786 provision.go:86] duration metric: configureAuth took 263.811459ms
	I0130 22:13:28.331232  680786 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:28.331476  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:13:28.331568  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.333837  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334205  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.334243  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334421  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.334626  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334804  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334978  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.335183  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.335552  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.335569  680786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:28.648182  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:28.648214  680786 machine.go:91] provisioned docker machine in 858.733436ms
	I0130 22:13:28.648228  680786 start.go:300] post-start starting for "no-preload-023824" (driver="kvm2")
	I0130 22:13:28.648254  680786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:28.648272  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.648633  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:28.648669  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.651616  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.651990  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.652019  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.652200  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.652427  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.652589  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.652737  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.742644  680786 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:28.746791  680786 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:28.746818  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:28.746949  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:28.747065  680786 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:28.747165  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:28.755371  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:28.776917  680786 start.go:303] post-start completed in 128.667778ms
	I0130 22:13:28.776944  680786 fix.go:56] fixHost completed within 21.474623735s
	I0130 22:13:28.776969  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.779261  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779562  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.779591  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779715  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.779938  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780109  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780291  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.780465  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.780778  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.780790  680786 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:28.902234  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652808.852489807
	
	I0130 22:13:28.902258  680786 fix.go:206] guest clock: 1706652808.852489807
	I0130 22:13:28.902265  680786 fix.go:219] Guest: 2024-01-30 22:13:28.852489807 +0000 UTC Remote: 2024-01-30 22:13:28.776948754 +0000 UTC m=+284.278530089 (delta=75.541053ms)
	I0130 22:13:28.902285  680786 fix.go:190] guest clock delta is within tolerance: 75.541053ms
	I0130 22:13:28.902291  680786 start.go:83] releasing machines lock for "no-preload-023824", held for 21.600013123s
	I0130 22:13:28.902314  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.902603  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.905058  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905455  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.905516  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905584  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906376  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906578  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906653  680786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:28.906711  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.906863  680786 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:28.906902  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.909484  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909525  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909824  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909856  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909886  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909902  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909952  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910141  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910150  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910347  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910350  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.910620  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:29.028948  680786 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:29.034774  680786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:29.182970  680786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:29.190306  680786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:29.190375  680786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:29.205114  680786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:29.205135  680786 start.go:475] detecting cgroup driver to use...
	I0130 22:13:29.205195  680786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:29.220998  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:29.234283  680786 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:29.234332  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:29.246205  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:29.258169  680786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:29.366756  680786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:29.499821  680786 docker.go:233] disabling docker service ...
	I0130 22:13:29.499908  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:29.513281  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:29.526823  680786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:29.644395  680786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:29.756912  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:29.768811  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:29.785830  680786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:29.785897  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.794702  680786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:29.794755  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.803342  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.812148  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.820802  680786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:29.830052  680786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:29.838334  680786 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:29.838402  680786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:29.849789  680786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:29.858298  680786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:29.968180  680786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:30.134232  680786 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:30.134309  680786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:30.139054  680786 start.go:543] Will wait 60s for crictl version
	I0130 22:13:30.139130  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.142760  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:30.183071  680786 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:30.183175  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.225981  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.276982  680786 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 22:13:28.928924  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Start
	I0130 22:13:28.929139  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring networks are active...
	I0130 22:13:28.929766  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network default is active
	I0130 22:13:28.930145  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network mk-embed-certs-713938 is active
	I0130 22:13:28.930485  680821 main.go:141] libmachine: (embed-certs-713938) Getting domain xml...
	I0130 22:13:28.931095  680821 main.go:141] libmachine: (embed-certs-713938) Creating domain...
	I0130 22:13:30.162733  680821 main.go:141] libmachine: (embed-certs-713938) Waiting to get IP...
	I0130 22:13:30.163807  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.164261  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.164352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.164238  681759 retry.go:31] will retry after 217.071442ms: waiting for machine to come up
	I0130 22:13:30.382542  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.382918  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.382952  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.382899  681759 retry.go:31] will retry after 372.773352ms: waiting for machine to come up
	I0130 22:13:30.278407  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:30.281307  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281730  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:30.281762  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281947  680786 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:30.285873  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:30.299947  680786 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:13:30.300015  680786 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:30.342071  680786 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 22:13:30.342094  680786 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:13:30.342198  680786 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.342218  680786 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.342257  680786 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.342278  680786 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.342288  680786 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.342205  680786 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.342265  680786 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 22:13:30.342563  680786 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343800  680786 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 22:13:30.343838  680786 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.343804  680786 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343805  680786 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.343809  680786 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.343801  680786 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.514364  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 22:13:30.529476  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.537822  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.540358  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.546677  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.559021  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.559189  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.579664  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
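Each "sudo podman image inspect --format {{.Id}}" run above is only an existence probe: it prints the image ID and exits 0 when the image is already in the node's container storage, and fails otherwise, which is what drives the "needs transfer" decisions that follow. A rough equivalent for a single image:

    # Sketch: decide whether an image must be transferred from the local cache.
    img="registry.k8s.io/pause:3.9"   # one of the images listed above
    if sudo podman image inspect --format '{{.Id}}' "$img" >/dev/null 2>&1; then
      echo "$img already present in container storage"
    else
      echo "$img missing; load it from the cached archive"
    fi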
	I0130 22:13:30.721137  680786 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 22:13:30.721228  680786 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.721280  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.745682  680786 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 22:13:30.745742  680786 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.745796  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750720  680786 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 22:13:30.750770  680786 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.750821  680786 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 22:13:30.750841  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750854  680786 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.750897  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768135  680786 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 22:13:30.768182  680786 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.768199  680786 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 22:13:30.768243  680786 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.768289  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768303  680786 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 22:13:30.768246  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768384  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.768329  680786 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.768499  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.768527  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.785074  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.785548  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.895706  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.895775  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.895925  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.910469  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910496  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910549  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 22:13:30.910578  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910584  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 22:13:30.910580  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910664  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.910628  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:30.928331  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 22:13:30.928431  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:30.958095  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958123  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958140  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 22:13:30.958176  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958205  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958178  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958249  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 22:13:30.958182  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958271  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958290  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 22:13:33.833277  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.87499883s)
	I0130 22:13:33.833318  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 22:13:33.833336  680786 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.875036585s)
	I0130 22:13:33.833372  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 22:13:33.833366  680786 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:33.833461  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.757262  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.757819  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.757870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.757738  681759 retry.go:31] will retry after 414.437055ms: waiting for machine to come up
	I0130 22:13:31.174434  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.174883  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.174936  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.174831  681759 retry.go:31] will retry after 555.308421ms: waiting for machine to come up
	I0130 22:13:31.731536  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.732150  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.732188  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.732111  681759 retry.go:31] will retry after 484.945442ms: waiting for machine to come up
	I0130 22:13:32.218554  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:32.218989  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:32.219024  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:32.218934  681759 retry.go:31] will retry after 802.660361ms: waiting for machine to come up
	I0130 22:13:33.022920  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:33.023362  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:33.023397  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:33.023298  681759 retry.go:31] will retry after 990.694559ms: waiting for machine to come up
	I0130 22:13:34.015896  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:34.016379  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:34.016407  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:34.016345  681759 retry.go:31] will retry after 1.382435075s: waiting for machine to come up
	I0130 22:13:35.400870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:35.401294  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:35.401327  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:35.401233  681759 retry.go:31] will retry after 1.53975085s: waiting for machine to come up
	I0130 22:13:37.909186  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075686172s)
	I0130 22:13:37.909214  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 22:13:37.909257  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:37.909303  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:39.052225  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.142886078s)
	I0130 22:13:39.052285  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 22:13:39.052326  680786 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:39.052412  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:36.942944  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:36.943539  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:36.943580  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:36.943478  681759 retry.go:31] will retry after 1.888978312s: waiting for machine to come up
	I0130 22:13:38.834886  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:38.835467  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:38.835508  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:38.835393  681759 retry.go:31] will retry after 1.774102713s: waiting for machine to come up
	I0130 22:13:41.133330  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080888409s)
	I0130 22:13:41.133358  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 22:13:41.133383  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:41.133432  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:43.814683  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.681223745s)
	I0130 22:13:43.814716  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 22:13:43.814742  680786 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:43.814779  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:40.611628  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:40.612048  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:40.612083  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:40.611995  681759 retry.go:31] will retry after 2.428322726s: waiting for machine to come up
	I0130 22:13:43.041506  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:43.041916  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:43.041950  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:43.041859  681759 retry.go:31] will retry after 4.531865882s: waiting for machine to come up
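In parallel, the embed-certs-713938 start is still waiting for the restarted VM to pick up a DHCP lease for MAC 52:54:00:79:c8:41, retrying with growing delays. Outside of minikube, the same wait can be expressed directly against libvirt; a sketch, assuming the virsh client is available on the host:

    # Sketch: poll a libvirt network until the guest's MAC has a DHCP lease.
    mac="52:54:00:79:c8:41"
    net="mk-embed-certs-713938"
    until sudo virsh net-dhcp-leases "$net" | grep -qi "$mac"; do
      echo "no lease for $mac yet; retrying"
      sleep 2
    done
    sudo virsh net-dhcp-leases "$net" | grep -i "$mac"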
	I0130 22:13:48.690103  681007 start.go:369] acquired machines lock for "default-k8s-diff-port-850803" in 4m39.687788229s
	I0130 22:13:48.690177  681007 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:48.690188  681007 fix.go:54] fixHost starting: 
	I0130 22:13:48.690569  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:48.690606  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:48.709730  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0130 22:13:48.710142  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:48.710684  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:13:48.710714  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:48.711070  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:48.711280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:13:48.711446  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:13:48.712865  681007 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850803: state=Stopped err=<nil>
	I0130 22:13:48.712909  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	W0130 22:13:48.713065  681007 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:48.716450  681007 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850803" ...
	I0130 22:13:48.717867  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Start
	I0130 22:13:48.718031  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring networks are active...
	I0130 22:13:48.718700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network default is active
	I0130 22:13:48.719030  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network mk-default-k8s-diff-port-850803 is active
	I0130 22:13:48.719391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Getting domain xml...
	I0130 22:13:48.720046  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Creating domain...
	I0130 22:13:44.761511  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 22:13:44.761571  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:44.761627  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:46.718526  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.956864919s)
	I0130 22:13:46.718569  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 22:13:46.718605  680786 cache_images.go:123] Successfully loaded all cached images
	I0130 22:13:46.718612  680786 cache_images.go:92] LoadImages completed in 16.376507144s
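This is the no-preload code path: because no preloaded tarball exists for v1.29.0-rc.2 on crio, every cached archive under .minikube/cache/images is staged into /var/lib/minikube/images on the VM and imported one at a time, which is why LoadImages takes roughly 16 seconds here. Each per-image step boils down to (path shown as an example from the runs above):

    # Sketch: import one cached image archive into CRI-O's storage.
    archive=/var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
    sudo podman load -i "$archive"
    sudo crictl images | grep kube-proxy   # confirm the runtime now sees it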
	I0130 22:13:46.718742  680786 ssh_runner.go:195] Run: crio config
	I0130 22:13:46.782286  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:13:46.782311  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:46.782332  680786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:46.782372  680786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-023824 NodeName:no-preload-023824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:46.782544  680786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-023824"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:46.782617  680786 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-023824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:13:46.782674  680786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 22:13:46.792236  680786 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:46.792309  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:46.800361  680786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 22:13:46.816070  680786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 22:13:46.830820  680786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 22:13:46.846493  680786 ssh_runner.go:195] Run: grep 192.168.61.232	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:46.849883  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:46.861414  680786 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824 for IP: 192.168.61.232
	I0130 22:13:46.861442  680786 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:46.861617  680786 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:46.861664  680786 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:46.861767  680786 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.key
	I0130 22:13:46.861831  680786 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key.e2a9f73e
	I0130 22:13:46.861872  680786 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key
	I0130 22:13:46.862006  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:46.862040  680786 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:46.862051  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:46.862074  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:46.862095  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:46.862118  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:46.862163  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:46.863014  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:46.887626  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:13:46.910152  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:46.931711  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:46.953156  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:46.974390  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:46.996094  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:47.017226  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:47.038317  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:47.059119  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:47.080077  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:47.101123  680786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:47.116152  680786 ssh_runner.go:195] Run: openssl version
	I0130 22:13:47.121529  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:47.130166  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134329  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134391  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.139537  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:47.148157  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:47.156558  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160623  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160682  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.165652  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:47.174350  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:47.183169  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187220  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187245  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.192369  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
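The ls/hash/ln sequence above installs each CA under the OpenSSL hashed-name convention in /etc/ssl/certs, so TLS clients on the node can locate it by subject hash. For one certificate the pattern is:

    # Sketch: trust a CA by linking it under its OpenSSL subject-hash name.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"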
	I0130 22:13:47.201432  680786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:47.205518  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:47.210821  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:47.216074  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:47.221255  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:47.226609  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:47.231891  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
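The -checkend 86400 runs verify that each control-plane certificate stays valid for at least another 24 hours; openssl exits 0 when the certificate will not expire inside that window, so all six checks passing is what lets the restart keep the existing certs instead of regenerating them. Standalone:

    # Sketch: exit non-zero if a certificate expires within the next 24 hours.
    cert=/var/lib/minikube/certs/apiserver-etcd-client.crt
    if sudo openssl x509 -noout -in "$cert" -checkend 86400; then
      echo "certificate still valid for at least a day"
    else
      echo "certificate expires within 24h and would need regeneration"
    fi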
	I0130 22:13:47.237220  680786 kubeadm.go:404] StartCluster: {Name:no-preload-023824 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:47.237355  680786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:47.237395  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:47.277488  680786 cri.go:89] found id: ""
	I0130 22:13:47.277561  680786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:47.286193  680786 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:47.286220  680786 kubeadm.go:636] restartCluster start
	I0130 22:13:47.286276  680786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:47.294206  680786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.295888  680786 kubeconfig.go:92] found "no-preload-023824" server: "https://192.168.61.232:8443"
	I0130 22:13:47.299852  680786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:47.307350  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.307401  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.317985  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.808078  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.808141  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.819689  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.308177  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.308241  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.319138  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.808388  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.808448  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.819501  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:49.308165  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.308254  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.319364  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
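The repeating "Checking apiserver status" / "stopped" pairs above are the restart loop probing for a running kube-apiserver with pgrep about twice a second; the process is not up yet at this point, so each probe exits with status 1 and the loop keeps waiting. The probe itself reduces to:

    # Sketch: wait until a kube-apiserver process is running, as the loop above does.
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 1
    done
    echo "kube-apiserver pid: $(sudo pgrep -xnf 'kube-apiserver.*minikube.*')"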
	I0130 22:13:47.577701  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578126  680821 main.go:141] libmachine: (embed-certs-713938) Found IP for machine: 192.168.72.213
	I0130 22:13:47.578150  680821 main.go:141] libmachine: (embed-certs-713938) Reserving static IP address...
	I0130 22:13:47.578166  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has current primary IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578564  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.578605  680821 main.go:141] libmachine: (embed-certs-713938) DBG | skip adding static IP to network mk-embed-certs-713938 - found existing host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"}
	I0130 22:13:47.578616  680821 main.go:141] libmachine: (embed-certs-713938) Reserved static IP address: 192.168.72.213
	I0130 22:13:47.578630  680821 main.go:141] libmachine: (embed-certs-713938) Waiting for SSH to be available...
	I0130 22:13:47.578646  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Getting to WaitForSSH function...
	I0130 22:13:47.580757  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581084  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.581120  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581221  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH client type: external
	I0130 22:13:47.581282  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa (-rw-------)
	I0130 22:13:47.581324  680821 main.go:141] libmachine: (embed-certs-713938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:47.581344  680821 main.go:141] libmachine: (embed-certs-713938) DBG | About to run SSH command:
	I0130 22:13:47.581357  680821 main.go:141] libmachine: (embed-certs-713938) DBG | exit 0
	I0130 22:13:47.669006  680821 main.go:141] libmachine: (embed-certs-713938) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:47.669397  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetConfigRaw
	I0130 22:13:47.670084  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.672437  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.672782  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.672806  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.673048  680821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/config.json ...
	I0130 22:13:47.673225  680821 machine.go:88] provisioning docker machine ...
	I0130 22:13:47.673243  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:47.673432  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673608  680821 buildroot.go:166] provisioning hostname "embed-certs-713938"
	I0130 22:13:47.673628  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673766  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.675747  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676016  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.676043  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676178  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.676351  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676484  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676618  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.676743  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.677070  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.677083  680821 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-713938 && echo "embed-certs-713938" | sudo tee /etc/hostname
	I0130 22:13:47.800976  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-713938
	
	I0130 22:13:47.801011  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.803566  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.803876  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.803901  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.804047  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.804235  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804417  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.804699  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.805016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.805033  680821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-713938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-713938/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-713938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:47.928846  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:47.928882  680821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:47.928908  680821 buildroot.go:174] setting up certificates
	I0130 22:13:47.928956  680821 provision.go:83] configureAuth start
	I0130 22:13:47.928976  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.929283  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.931756  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932014  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.932045  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932206  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.934351  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934647  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.934670  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934814  680821 provision.go:138] copyHostCerts
	I0130 22:13:47.934875  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:47.934889  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:47.934963  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:47.935072  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:47.935087  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:47.935120  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:47.935196  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:47.935206  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:47.935234  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:47.935349  680821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.embed-certs-713938 san=[192.168.72.213 192.168.72.213 localhost 127.0.0.1 minikube embed-certs-713938]
	I0130 22:13:47.995543  680821 provision.go:172] copyRemoteCerts
	I0130 22:13:47.995624  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:47.995659  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.998113  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998409  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.998436  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998636  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.998822  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.999004  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.999123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.086454  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:48.108713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:48.131124  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:48.153234  680821 provision.go:86] duration metric: configureAuth took 224.258095ms
	I0130 22:13:48.153269  680821 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:48.153447  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:13:48.153554  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.156268  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156673  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.156705  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156847  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.157070  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157294  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157481  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.157649  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.158119  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.158143  680821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:48.449095  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:48.449131  680821 machine.go:91] provisioned docker machine in 775.890813ms
	I0130 22:13:48.449146  680821 start.go:300] post-start starting for "embed-certs-713938" (driver="kvm2")
	I0130 22:13:48.449161  680821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:48.449185  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.449573  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:48.449605  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.452408  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.452831  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.452866  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.453009  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.453240  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.453416  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.453566  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.539764  680821 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:48.543876  680821 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:48.543905  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:48.543969  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:48.544045  680821 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:48.544163  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:48.552947  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:48.573560  680821 start.go:303] post-start completed in 124.400867ms
	I0130 22:13:48.573588  680821 fix.go:56] fixHost completed within 19.671118722s
	I0130 22:13:48.573615  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.576352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576755  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.576777  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576965  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.577170  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577337  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.577708  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.578016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.578029  680821 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:48.689910  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652828.640343702
	
	I0130 22:13:48.689937  680821 fix.go:206] guest clock: 1706652828.640343702
	I0130 22:13:48.689948  680821 fix.go:219] Guest: 2024-01-30 22:13:48.640343702 +0000 UTC Remote: 2024-01-30 22:13:48.573593176 +0000 UTC m=+303.018932163 (delta=66.750526ms)
	I0130 22:13:48.690012  680821 fix.go:190] guest clock delta is within tolerance: 66.750526ms
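The fix.go lines above read the guest clock over SSH with date +%s.%N, compare it against the host clock, and accept the host when the delta is within tolerance. A minimal Go sketch of that comparison follows; the function name, the fractional-second parsing, and the 2-second tolerance are assumptions for illustration, not minikube's actual code.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the output of `date +%s.%N` captured on the guest and
// returns the absolute difference from the given host time.
// Illustrative sketch only; not minikube's implementation.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		// Pad or truncate the fractional part so it is exactly nanoseconds.
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return time.Duration(math.Abs(float64(host.Sub(guest)))), nil
}

func main() {
	d, err := clockDelta("1706652828.640343702", time.Now())
	if err != nil {
		panic(err)
	}
	const tolerance = 2 * time.Second // assumed threshold for this sketch
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", d, d <= tolerance)
}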
	I0130 22:13:48.690023  680821 start.go:83] releasing machines lock for "embed-certs-713938", held for 19.787596053s
	I0130 22:13:48.690062  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.690367  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:48.692836  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693147  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.693180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693372  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.693895  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694095  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694178  680821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:48.694232  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.694331  680821 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:48.694354  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.696786  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697137  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697205  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697357  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697529  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.697648  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697675  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697706  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.697830  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697910  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.697985  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.698143  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.698307  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.807627  680821 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:48.813332  680821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:48.953919  680821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:48.960672  680821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:48.960744  680821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:48.977684  680821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:48.977702  680821 start.go:475] detecting cgroup driver to use...
	I0130 22:13:48.977766  680821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:48.989811  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:49.001223  680821 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:49.001281  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:49.012649  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:49.024426  680821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:49.130220  680821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:49.248922  680821 docker.go:233] disabling docker service ...
	I0130 22:13:49.248999  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:49.262066  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:49.272736  680821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:49.394001  680821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:49.514043  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:49.526282  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:49.545253  680821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:49.545303  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.554715  680821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:49.554775  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.564248  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.573151  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.582148  680821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:49.591604  680821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:49.599683  680821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:49.599722  680821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:49.611807  680821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:49.622179  680821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:49.745824  680821 ssh_runner.go:195] Run: sudo systemctl restart crio
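The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs, conmon cgroup), probes net.bridge.bridge-nf-call-iptables and falls back to loading br_netfilter when the sysctl key is missing, enables IPv4 forwarding, then restarts CRI-O. The Go sketch below reproduces just the netfilter/ip_forward step; it shells out with sudo exactly as the log does, assumes passwordless sudo, and is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensureBridgeNetfilter mirrors the logged sequence: probe the
// bridge-nf-call-iptables sysctl, load br_netfilter if the key is missing,
// then turn on IPv4 forwarding.
func ensureBridgeNetfilter() error {
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		// The sysctl key does not exist yet; load the module that provides it.
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			return fmt.Errorf("modprobe br_netfilter: %w", err)
		}
	}
	// Equivalent of: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	cmd := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}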
	I0130 22:13:49.924707  680821 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:49.924788  680821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:49.930158  680821 start.go:543] Will wait 60s for crictl version
	I0130 22:13:49.930234  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:13:49.933971  680821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:49.973662  680821 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:49.973736  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.018705  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.070907  680821 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:13:50.072352  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:50.075100  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075487  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:50.075519  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075750  680821 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:50.079538  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:50.093965  680821 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:13:50.094028  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:50.133425  680821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:13:50.133506  680821 ssh_runner.go:195] Run: which lz4
	I0130 22:13:50.137267  680821 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:13:50.141273  680821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:13:50.141299  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:13:49.938197  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting to get IP...
	I0130 22:13:49.939301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939717  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939806  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:49.939711  681876 retry.go:31] will retry after 300.092754ms: waiting for machine to come up
	I0130 22:13:50.241301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241860  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241890  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.241804  681876 retry.go:31] will retry after 313.990905ms: waiting for machine to come up
	I0130 22:13:50.557661  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558161  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.558077  681876 retry.go:31] will retry after 484.197655ms: waiting for machine to come up
	I0130 22:13:51.043815  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044313  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044345  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.044255  681876 retry.go:31] will retry after 595.208415ms: waiting for machine to come up
	I0130 22:13:51.640765  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641244  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641281  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.641207  681876 retry.go:31] will retry after 646.272845ms: waiting for machine to come up
	I0130 22:13:52.288980  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289729  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:52.289599  681876 retry.go:31] will retry after 864.623353ms: waiting for machine to come up
	I0130 22:13:53.155328  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155826  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:53.155750  681876 retry.go:31] will retry after 943.126628ms: waiting for machine to come up
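The retry.go lines above poll libvirt for the machine's DHCP lease, sleeping a little longer after each miss until an IP appears. A rough Go sketch of that pattern follows; waitForIP, the growth factor, and the jitter are assumptions chosen for illustration, not the retry package's actual behaviour.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns an address or the deadline passes,
// sleeping a growing, jittered interval between attempts.
func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		ip, err := lookup()
		if err == nil && ip != "" {
			return ip, nil
		}
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff = backoff * 3 / 2 // grow the interval between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	tries := 0
	ip, err := waitForIP(func() (string, error) {
		tries++
		if tries < 4 {
			return "", errors.New("unable to find current IP address")
		}
		return "192.168.50.10", nil // example address for the sketch
	}, 30*time.Second)
	fmt.Println(ip, err)
}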
	I0130 22:13:49.807842  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.807941  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.826075  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.308394  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.308476  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.323858  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.807449  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.807538  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.823237  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.307590  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.307684  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.322999  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.807466  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.807551  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.822502  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.308300  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.308431  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.329435  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.808248  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.808379  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.823821  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.308375  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.308462  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.321178  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.807637  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.807748  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.823761  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:54.308223  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.308300  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.320791  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.023827  680821 crio.go:444] Took 1.886590 seconds to copy over tarball
	I0130 22:13:52.023892  680821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:13:55.116587  680821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.092664003s)
	I0130 22:13:55.116614  680821 crio.go:451] Took 3.092762 seconds to extract the tarball
	I0130 22:13:55.116644  680821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:13:55.159215  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:55.210233  680821 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:13:55.210263  680821 cache_images.go:84] Images are preloaded, skipping loading
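Preload handling above is a simple check: list images through crictl, and if the expected kube-apiserver tag is missing, scp the preloaded tarball, extract it under /var, and list again. The sketch below shows one way to express the check in Go; the JSON field names follow the CRI ListImages response and should be treated as an assumption if a different crictl version is in use.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// imagePreloaded runs `crictl images --output json` (as the log does) and
// reports whether any image tag contains the wanted reference.
func imagePreloaded(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var listing struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}
	if err := json.Unmarshal(out, &listing); err != nil {
		return false, err
	}
	for _, img := range listing.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, want) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := imagePreloaded("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
}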
	I0130 22:13:55.210344  680821 ssh_runner.go:195] Run: crio config
	I0130 22:13:55.268468  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:13:55.268496  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:55.268519  680821 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:55.268545  680821 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-713938 NodeName:embed-certs-713938 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:55.268710  680821 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-713938"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:55.268801  680821 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-713938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
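The generated kubeadm.yaml above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), written alongside the kubelet unit drop-in. A small Go sketch for walking such a multi-document file is shown below; it relies on gopkg.in/yaml.v3 (an assumption, any multi-document YAML decoder would do) and a loose map rather than kubeadm's typed API.

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"

	"gopkg.in/yaml.v3" // assumption: yaml.v3 is available
)

// listKubeadmDocs decodes each document in a kubeadm.yaml stream and prints
// its kind plus, where present, the kubernetesVersion field.
func listKubeadmDocs(raw string) error {
	dec := yaml.NewDecoder(strings.NewReader(raw))
	for {
		var doc map[string]interface{}
		err := dec.Decode(&doc)
		if errors.Is(err, io.EOF) {
			return nil
		}
		if err != nil {
			return err
		}
		fmt.Printf("kind=%v", doc["kind"])
		if v, ok := doc["kubernetesVersion"]; ok {
			fmt.Printf(" kubernetesVersion=%v", v)
		}
		fmt.Println()
	}
}

func main() {
	cfg := `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: v1.28.4
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
`
	if err := listKubeadmDocs(cfg); err != nil {
		fmt.Println(err)
	}
}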
	I0130 22:13:55.268880  680821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:13:55.278244  680821 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:55.278321  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:55.287034  680821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0130 22:13:55.302012  680821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:13:55.318716  680821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0130 22:13:55.335364  680821 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:55.338950  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:55.349780  680821 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938 for IP: 192.168.72.213
	I0130 22:13:55.349814  680821 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:55.350000  680821 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:55.350058  680821 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:55.350157  680821 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/client.key
	I0130 22:13:55.350242  680821 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key.0982f839
	I0130 22:13:55.350299  680821 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key
	I0130 22:13:55.350469  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:55.350520  680821 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:55.350539  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:55.350577  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:55.350612  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:55.350648  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:55.350707  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:55.351807  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:55.373160  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 22:13:55.394634  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:55.416281  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:55.438713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:55.460324  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:55.481480  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:55.502869  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:55.524520  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:55.547601  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:55.569483  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:55.590741  680821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:54.100347  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:54.100763  681876 retry.go:31] will retry after 1.412406258s: waiting for machine to come up
	I0130 22:13:55.514929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515302  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515362  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:55.515267  681876 retry.go:31] will retry after 1.440442596s: waiting for machine to come up
	I0130 22:13:56.957895  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958367  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958390  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:56.958326  681876 retry.go:31] will retry after 1.996277334s: waiting for machine to come up
	I0130 22:13:54.807936  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.808021  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.824410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.307845  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.307937  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.320645  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.808272  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.808384  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.820051  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.307482  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.307567  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.319410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.808044  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.808167  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.820440  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.308301  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.308409  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.323612  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.323650  680786 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:13:57.323715  680786 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:13:57.323733  680786 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:13:57.323798  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:57.364379  680786 cri.go:89] found id: ""
	I0130 22:13:57.364467  680786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:13:57.380175  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:13:57.390701  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:13:57.390770  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400039  680786 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400071  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:57.546658  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.567155  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020447474s)
	I0130 22:13:58.567192  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.794332  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.875254  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.943890  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:13:58.944000  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:59.444721  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:55.608619  680821 ssh_runner.go:195] Run: openssl version
	I0130 22:13:55.880188  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:55.890762  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895346  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895423  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.900872  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:55.911050  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:55.921117  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925362  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925410  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.930499  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:55.940167  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:55.950284  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954643  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954688  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.959830  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:13:55.969573  680821 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:55.973654  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:55.980878  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:55.988262  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:55.995379  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:56.002387  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:56.007729  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
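Each openssl x509 -checkend 86400 call above asks whether a certificate expires within the next 24 hours. The same check in Go looks roughly like the sketch below; the helper name is made up for illustration and the path in main is only an example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within
// the given window, mirroring `openssl x509 -noout -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}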
	I0130 22:13:56.013164  680821 kubeadm.go:404] StartCluster: {Name:embed-certs-713938 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:56.013256  680821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:56.013290  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:56.054588  680821 cri.go:89] found id: ""
	I0130 22:13:56.054670  680821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:56.064691  680821 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:56.064720  680821 kubeadm.go:636] restartCluster start
	I0130 22:13:56.064781  680821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:56.074132  680821 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.075653  680821 kubeconfig.go:92] found "embed-certs-713938" server: "https://192.168.72.213:8443"
	I0130 22:13:56.078677  680821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:56.087919  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.087968  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.099213  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.588843  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.588940  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.601681  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.088185  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.088291  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.103229  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.588880  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.589012  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.604127  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.088751  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.088880  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.100833  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.588147  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.588264  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.604368  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.088571  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.088681  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.104028  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.588569  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.588684  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.602995  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.088596  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.088729  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.104195  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.588883  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.588987  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.605168  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.956101  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956568  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956598  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:58.956511  681876 retry.go:31] will retry after 2.859682959s: waiting for machine to come up
	I0130 22:14:01.819863  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820443  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820476  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:01.820388  681876 retry.go:31] will retry after 2.840054468s: waiting for machine to come up
	I0130 22:13:59.945172  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.444900  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.945042  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.444410  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.486688  680786 api_server.go:72] duration metric: took 2.54280014s to wait for apiserver process to appear ...
	I0130 22:14:01.486719  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:01.486775  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.487585  680786 api_server.go:269] stopped: https://192.168.61.232:8443/healthz: Get "https://192.168.61.232:8443/healthz": dial tcp 192.168.61.232:8443: connect: connection refused
	I0130 22:14:01.987279  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.088999  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.089091  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.104740  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:01.588046  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.588171  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.603186  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.088381  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.088495  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.104148  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.588728  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.588850  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.603782  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.088297  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.088396  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.101192  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.588856  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.588967  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.600516  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.088592  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.088688  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.101572  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.588042  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.588181  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.600890  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.088324  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.088437  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.103896  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.588678  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.588786  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.604329  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
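
The block above is api_server.go polling for a kube-apiserver process over SSH roughly twice a second until pgrep finds one. A minimal, hypothetical sketch of that kind of poll loop, run locally instead of over ssh_runner, might look like the following (the timeout, tick interval and function name are assumptions for illustration, not minikube's actual implementation):

    package main

    import (
        "context"
        "fmt"
        "os/exec"
        "time"
    )

    // waitForProcess polls `pgrep -xnf pattern` until it reports a PID or the
    // context expires, mirroring the retry cadence visible in the log above.
    func waitForProcess(ctx context.Context, pattern string) (string, error) {
        ticker := time.NewTicker(500 * time.Millisecond)
        defer ticker.Stop()
        for {
            out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output()
            if err == nil {
                return string(out), nil // pgrep exits 0 once a matching process exists
            }
            select {
            case <-ctx.Done():
                return "", fmt.Errorf("apiserver process did not appear: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
        defer cancel()
        pid, err := waitForProcess(ctx, "kube-apiserver.*minikube.*")
        fmt.Println(pid, err)
    }
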
	I0130 22:14:04.974310  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:04.974343  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:04.974361  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.032790  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.032856  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.032882  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.052788  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.052811  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.487474  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.494053  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.494084  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:05.987783  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.994015  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.994049  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:06.487723  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:06.492959  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:14:06.500169  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:14:06.500208  680786 api_server.go:131] duration metric: took 5.013479999s to wait for apiserver health ...
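
In the exchange above, the 403 (anonymous user) and 500 (poststarthooks still failing) responses are both treated as "not healthy yet", and the wait ends only when /healthz returns 200. A self-contained sketch of that style of probe follows; it assumes the endpoint URL from the log and skips TLS verification because the apiserver's serving certificate is signed by a cluster CA the probe does not trust. This is illustrative only, not minikube's api_server.go:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // probeHealthz polls url until it answers 200 OK or the deadline passes.
    // Any other answer (403, 500, connection refused) counts as "not ready yet".
    func probeHealthz(url string, deadline time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skip verification for the unauthenticated smoke probe (assumption
            // made for this sketch; a real client would trust the cluster CA).
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s never returned 200 within %s", url, deadline)
    }

    func main() {
        fmt.Println(probeHealthz("https://192.168.61.232:8443/healthz", 2*time.Minute))
    }
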
	I0130 22:14:06.500221  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:14:06.500230  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:06.502253  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:04.661649  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.661976  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.662010  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:04.661932  681876 retry.go:31] will retry after 4.414855002s: waiting for machine to come up
	I0130 22:14:06.503764  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:06.514909  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:06.534344  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:06.546282  680786 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:06.546323  680786 system_pods.go:61] "coredns-76f75df574-cvjdk" [3f6526d5-7bf6-4d51-96bc-9dc6f70ead98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:06.546333  680786 system_pods.go:61] "etcd-no-preload-023824" [89ebff7a-3ac5-4aa7-aab7-9c61e59027a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:06.546352  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [bea4217d-ad4c-4945-ac59-1589976698e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:06.546369  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [4a1866ae-14ce-4132-bc99-225c518ab4bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:06.546394  680786 system_pods.go:61] "kube-proxy-phh5j" [3e662e91-7886-44e7-87a0-4a727011062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:06.546407  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [ad7a7f1c-6aa6-4e16-94d5-e5db7d3e39f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:06.546422  680786 system_pods.go:61] "metrics-server-57f55c9bc5-qfj5x" [13ae9773-8607-43ae-a122-4f97b367a954] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:06.546433  680786 system_pods.go:61] "storage-provisioner" [50dd4d19-5e05-47b7-a11f-5975bc6ef0e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:06.546445  680786 system_pods.go:74] duration metric: took 12.076118ms to wait for pod list to return data ...
	I0130 22:14:06.546458  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:06.549604  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:06.549634  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:06.549645  680786 node_conditions.go:105] duration metric: took 3.179552ms to run NodePressure ...
	I0130 22:14:06.549662  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.858172  680786 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863712  680786 kubeadm.go:787] kubelet initialised
	I0130 22:14:06.863731  680786 kubeadm.go:788] duration metric: took 5.530573ms waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863738  680786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:06.869540  680786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:08.886275  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
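
pod_ready.go is now waiting for each system-critical pod to report the PodReady condition, logging "Ready":"False" while coredns is still coming up. A hedged sketch of the same check using client-go is below; the kubeconfig path and the fixed pod name are assumptions taken from this log, and the code is not the minikube helper itself:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True.
    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s wait in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-76f75df574-cvjdk", metav1.GetOptions{})
            if err == nil && podReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("pod never became Ready")
    }
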
	I0130 22:14:10.543927  680506 start.go:369] acquired machines lock for "old-k8s-version-912992" in 58.237287777s
	I0130 22:14:10.543984  680506 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:14:10.543993  680506 fix.go:54] fixHost starting: 
	I0130 22:14:10.544466  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:14:10.544494  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:14:10.563544  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0130 22:14:10.564063  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:14:10.564683  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:14:10.564705  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:14:10.565128  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:14:10.565338  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:10.565526  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:14:10.567290  680506 fix.go:102] recreateIfNeeded on old-k8s-version-912992: state=Stopped err=<nil>
	I0130 22:14:10.567314  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	W0130 22:14:10.567565  680506 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:14:10.569441  680506 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-912992" ...
	I0130 22:14:06.089016  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:06.089138  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:06.101226  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:06.101265  680821 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:06.101276  680821 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:06.101292  680821 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:06.101373  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:06.145816  680821 cri.go:89] found id: ""
	I0130 22:14:06.145935  680821 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:06.162118  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:06.174308  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:06.174379  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186134  680821 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186164  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.312544  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.860323  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.068181  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.151741  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.236354  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:07.236461  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:07.737169  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.237398  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.737483  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.237152  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.736646  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.763936  680821 api_server.go:72] duration metric: took 2.527584407s to wait for apiserver process to appear ...
	I0130 22:14:09.763962  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:09.763991  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:09.078352  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078935  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Found IP for machine: 192.168.50.254
	I0130 22:14:09.078985  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has current primary IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078997  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserving static IP address...
	I0130 22:14:09.079366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.079391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | skip adding static IP to network mk-default-k8s-diff-port-850803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"}
	I0130 22:14:09.079411  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Getting to WaitForSSH function...
	I0130 22:14:09.079431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserved static IP address: 192.168.50.254
	I0130 22:14:09.079442  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for SSH to be available...
	I0130 22:14:09.082189  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082612  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.082638  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082892  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH client type: external
	I0130 22:14:09.082917  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa (-rw-------)
	I0130 22:14:09.082982  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:09.082996  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | About to run SSH command:
	I0130 22:14:09.083009  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | exit 0
	I0130 22:14:09.182746  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:09.183304  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetConfigRaw
	I0130 22:14:09.184088  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.187115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187576  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.187606  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187972  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:14:09.188234  681007 machine.go:88] provisioning docker machine ...
	I0130 22:14:09.188262  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:09.188470  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188648  681007 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850803"
	I0130 22:14:09.188670  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188822  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.191366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191769  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.191808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.192148  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192332  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192488  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.192732  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.193245  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.193273  681007 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850803 && echo "default-k8s-diff-port-850803" | sudo tee /etc/hostname
	I0130 22:14:09.344664  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850803
	
	I0130 22:14:09.344700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.348016  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348485  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.348516  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348685  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.348962  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.349505  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.349996  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.350025  681007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:09.490740  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:09.490778  681007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:09.490812  681007 buildroot.go:174] setting up certificates
	I0130 22:14:09.490825  681007 provision.go:83] configureAuth start
	I0130 22:14:09.490844  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.491225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.494577  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495040  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.495085  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495194  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.497931  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498407  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.498433  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498638  681007 provision.go:138] copyHostCerts
	I0130 22:14:09.498702  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:09.498717  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:09.498778  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:09.498898  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:09.498912  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:09.498955  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:09.499039  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:09.499052  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:09.499080  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:09.499147  681007 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850803 san=[192.168.50.254 192.168.50.254 localhost 127.0.0.1 minikube default-k8s-diff-port-850803]
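
provision.go:112 is minting a per-machine server certificate whose SANs cover the VM IP, localhost and both machine names, signed by the ca.pem/ca-key.pem pair handled just above. A standard-library sketch of issuing such a SAN-bearing certificate is shown below; it is self-signed to keep the example short (the log signs with the minikube CA instead), and the SAN values are copied from the log line:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-850803"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs taken from the provision.go log line above.
            DNSNames:    []string{"localhost", "minikube", "default-k8s-diff-port-850803"},
            IPAddresses: []net.IP{net.ParseIP("192.168.50.254"), net.ParseIP("127.0.0.1")},
        }
        // Self-signed (template == parent) only for brevity in this sketch;
        // the real flow signs with the CA key referenced in the log.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
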
	I0130 22:14:09.749739  681007 provision.go:172] copyRemoteCerts
	I0130 22:14:09.749810  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:09.749848  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.753032  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753498  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.753533  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753727  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.753945  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.754170  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.754364  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:09.851640  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:09.879906  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 22:14:09.907030  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:09.934916  681007 provision.go:86] duration metric: configureAuth took 444.054165ms
	I0130 22:14:09.934954  681007 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:09.935190  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:14:09.935324  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.938507  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.938854  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.938894  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.939068  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.939312  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939517  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.939899  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.940390  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.940421  681007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:10.275894  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:10.275935  681007 machine.go:91] provisioned docker machine in 1.087679661s
	I0130 22:14:10.275950  681007 start.go:300] post-start starting for "default-k8s-diff-port-850803" (driver="kvm2")
	I0130 22:14:10.275965  681007 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:10.275989  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.276387  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:10.276445  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.279676  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280069  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.280115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280364  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.280584  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.280766  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.280923  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.373204  681007 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:10.377609  681007 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:10.377637  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:10.377705  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:10.377773  681007 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:10.377857  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:10.388096  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:10.414529  681007 start.go:303] post-start completed in 138.561717ms
	I0130 22:14:10.414557  681007 fix.go:56] fixHost completed within 21.7243684s
	I0130 22:14:10.414586  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.417282  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417709  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.417741  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417872  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.418063  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418233  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418356  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.418555  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:10.419070  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:10.419086  681007 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:14:10.543719  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652850.477584158
	
	I0130 22:14:10.543751  681007 fix.go:206] guest clock: 1706652850.477584158
	I0130 22:14:10.543762  681007 fix.go:219] Guest: 2024-01-30 22:14:10.477584158 +0000 UTC Remote: 2024-01-30 22:14:10.414562089 +0000 UTC m=+301.564256760 (delta=63.022069ms)
	I0130 22:14:10.543828  681007 fix.go:190] guest clock delta is within tolerance: 63.022069ms
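The tolerance check here is plain subtraction of the two timestamps printed on the previous line: 1706652850.477584158 s (guest) minus 1706652850.414562089 s (remote) is 0.063022069 s, i.e. the reported 63.022069ms delta, which is small enough that no clock adjustment is attempted.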
	I0130 22:14:10.543837  681007 start.go:83] releasing machines lock for "default-k8s-diff-port-850803", held for 21.853682485s
	I0130 22:14:10.543884  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.544172  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:10.547453  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.547833  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.547907  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.548185  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554556  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554902  681007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:10.554975  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.555050  681007 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:10.555093  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.558413  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559108  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559387  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559438  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559764  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.559857  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.560050  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560137  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.560224  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560350  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560579  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560578  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.560760  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.681106  681007 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:10.688790  681007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:10.845108  681007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:10.853366  681007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:10.853540  681007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:10.873299  681007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:10.873326  681007 start.go:475] detecting cgroup driver to use...
	I0130 22:14:10.873426  681007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:10.891563  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:10.908180  681007 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:10.908258  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:10.921344  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:10.935068  681007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:11.036505  681007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:11.151640  681007 docker.go:233] disabling docker service ...
	I0130 22:14:11.151718  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:11.167082  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:11.178680  681007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:11.303325  681007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:11.410097  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:11.426297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:11.452546  681007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:14:11.452634  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.463081  681007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:11.463156  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.472742  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.482828  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.494761  681007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:11.507028  681007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:11.517686  681007 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:11.517742  681007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:11.530301  681007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:11.541975  681007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:11.696623  681007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:14:11.913271  681007 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:11.913391  681007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:11.919870  681007 start.go:543] Will wait 60s for crictl version
	I0130 22:14:11.919944  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:14:11.926064  681007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:11.975070  681007 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:11.975177  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.033039  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.081059  681007 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
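The two sed calls above switch CRI-O's pause image and cgroup manager in its drop-in config before the `systemctl restart crio` at 22:14:11.696623. A minimal stand-alone sketch of that same file rewrite, using local file I/O instead of minikube's ssh_runner (path, key names, and values are the ones in the log; this is illustrative, not minikube's actual code path):

package main

import (
	"fmt"
	"os"
	"regexp"
)

// configureCrio rewrites the pause_image and cgroup_manager lines in CRI-O's
// drop-in config, mirroring the sed invocations logged above.
func configureCrio(path, pauseImage, cgroupManager string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = "%s"`, pauseImage)))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = "%s"`, cgroupManager)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
		"registry.k8s.io/pause:3.9", "cgroupfs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// A `systemctl restart crio`, as in the log, is still needed for the change to take effect.
}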
	I0130 22:14:10.570784  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Start
	I0130 22:14:10.571067  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring networks are active...
	I0130 22:14:10.571790  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network default is active
	I0130 22:14:10.572160  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network mk-old-k8s-version-912992 is active
	I0130 22:14:10.572697  680506 main.go:141] libmachine: (old-k8s-version-912992) Getting domain xml...
	I0130 22:14:10.573411  680506 main.go:141] libmachine: (old-k8s-version-912992) Creating domain...
	I0130 22:14:11.948333  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting to get IP...
	I0130 22:14:11.949455  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:11.950018  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:11.950060  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:11.949981  682021 retry.go:31] will retry after 276.511731ms: waiting for machine to come up
	I0130 22:14:12.228702  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.229508  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.229544  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.229445  682021 retry.go:31] will retry after 291.918453ms: waiting for machine to come up
	I0130 22:14:12.522882  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.523484  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.523520  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.523451  682021 retry.go:31] will retry after 411.891157ms: waiting for machine to come up
	I0130 22:14:12.082431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:12.085750  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086144  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:12.086175  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086400  681007 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:12.091494  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
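The grep/echo pipeline above keeps the host.minikube.internal entry idempotent: any stale line is stripped and the current IP mapping re-appended, then the result is copied back over /etc/hosts. A rough Go equivalent of that filter-and-append step (it only builds the new contents; the sudo copy back into place is left out, and the IP/hostname are the ones from the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry drops any existing line for hostname and appends
// "ip\thostname", mimicking the grep -v / echo pipeline in the log.
func upsertHostsEntry(hostsPath, ip, hostname string) (string, error) {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return "", err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+hostname) {
			continue // stale entry, drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+hostname)
	return strings.Join(kept, "\n") + "\n", nil
}

func main() {
	out, err := upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Print(out) // the real flow writes this to /tmp/h.$$ and sudo-copies it over /etc/hosts
}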
	I0130 22:14:12.104832  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:14:12.104904  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:12.160529  681007 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:14:12.160610  681007 ssh_runner.go:195] Run: which lz4
	I0130 22:14:12.165037  681007 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 22:14:12.169743  681007 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:12.169772  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:14:11.379194  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.394473  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.254742  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.254788  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.254809  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.438140  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.438192  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.438210  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.470956  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.470985  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.764535  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.773346  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:13.773385  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.264393  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.277818  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:14.277878  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.764145  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.769720  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:14:14.778872  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:14.778910  680821 api_server.go:131] duration metric: took 5.01493889s to wait for apiserver health ...
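The 403 → 500 → 200 progression above is the apiserver coming up: anonymous /healthz requests are refused until the rbac/bootstrap-roles post-start hook has created the default roles, then the 500 responses enumerate the hooks still pending, and finally a bare "ok" ends the wait after about 5s. A minimal poller in the same spirit as api_server.go (TLS verification is skipped because the cluster serving cert is self-signed; the URL and the 500ms interval are taken from the log and from the retry cadence visible in it):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes, roughly what api_server.go is doing above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // self-signed cluster cert
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.213:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}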
	I0130 22:14:14.778923  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:14:14.778931  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:14.780880  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:14.782682  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:14.798955  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:14.824975  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:14.841121  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:14.841166  680821 system_pods.go:61] "coredns-5dd5756b68-wcncl" [43c0f4bc-1d47-4337-a179-bb27a4164ca5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:14.841177  680821 system_pods.go:61] "etcd-embed-certs-713938" [f8c3bfda-0fca-429b-a0a2-b4fc1d496085] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:14.841196  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [7536531d-a1bd-451b-8530-143f9a41b85c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:14.841209  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [76c2d0eb-823a-41df-91dc-584acb56f81e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:14.841222  680821 system_pods.go:61] "kube-proxy-4c6nn" [253bee90-32a4-4dc0-9db7-bdfa663bcc96] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:14.841233  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [3b4e8324-e074-45ab-b24c-df1bd226e12e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:14.841247  680821 system_pods.go:61] "metrics-server-57f55c9bc5-hcg7l" [25906794-7927-48cf-8f80-52f8a2a68d99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:14.841265  680821 system_pods.go:61] "storage-provisioner" [5820d2a9-be84-42e8-ac25-d4ac1cf22d90] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:14.841275  680821 system_pods.go:74] duration metric: took 16.275602ms to wait for pod list to return data ...
	I0130 22:14:14.841289  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:14.848145  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:14.848183  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:14.848198  680821 node_conditions.go:105] duration metric: took 6.903129ms to run NodePressure ...
	I0130 22:14:14.848221  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:15.186295  680821 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191845  680821 kubeadm.go:787] kubelet initialised
	I0130 22:14:15.191872  680821 kubeadm.go:788] duration metric: took 5.54389ms waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191883  680821 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:15.202037  680821 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
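pod_ready.go above keeps re-checking each system-critical pod's Ready condition until it flips to True or the 4m0s budget runs out. A small stand-in for that wait driven through kubectl's jsonpath output (the context, namespace, and pod name are copied from the log; the 2s polling interval is an assumption, and this shells out to kubectl rather than using minikube's client):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitPodReady polls `kubectl get pod` for the Ready condition, similar in
// spirit to the pod_ready.go waits in the log.
func waitPodReady(context, namespace, pod string, timeout time.Duration) error {
	jsonpath := `-o=jsonpath={.status.conditions[?(@.type=="Ready")].status}`
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", context,
			"-n", namespace, "get", "pod", pod, jsonpath).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", namespace, pod, timeout)
}

func main() {
	if err := waitPodReady("embed-certs-713938", "kube-system", "coredns-5dd5756b68-wcncl", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}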
	I0130 22:14:12.937414  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.938094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.938126  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.937994  682021 retry.go:31] will retry after 576.497569ms: waiting for machine to come up
	I0130 22:14:13.515903  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:13.516521  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:13.516547  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:13.516421  682021 retry.go:31] will retry after 519.706227ms: waiting for machine to come up
	I0130 22:14:14.037307  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.037937  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.037967  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.037845  682021 retry.go:31] will retry after 797.706186ms: waiting for machine to come up
	I0130 22:14:14.836997  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.837662  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.837686  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.837561  682021 retry.go:31] will retry after 782.265584ms: waiting for machine to come up
	I0130 22:14:15.621147  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:15.621747  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:15.621779  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:15.621706  682021 retry.go:31] will retry after 1.00093966s: waiting for machine to come up
	I0130 22:14:16.624002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:16.624474  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:16.624506  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:16.624365  682021 retry.go:31] will retry after 1.760162378s: waiting for machine to come up
	I0130 22:14:14.166451  681007 crio.go:444] Took 2.001438 seconds to copy over tarball
	I0130 22:14:14.166549  681007 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:17.707309  681007 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.540722039s)
	I0130 22:14:17.707346  681007 crio.go:451] Took 3.540858 seconds to extract the tarball
	I0130 22:14:17.707367  681007 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:14:17.751814  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:17.817529  681007 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:14:17.817564  681007 cache_images.go:84] Images are preloaded, skipping loading
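The preload decision above hinges on a single image lookup: `crictl images --output json` is scanned for the expected kube-apiserver tag, and only when it is absent does minikube scp the ~458 MB preload tarball and unpack it into /var. A sketch of that check; the JSON field names ("images", "repoTags") are assumed here to match crictl's JSON output and should be verified against the crictl version in use:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// Assumed shape of `crictl images --output json`: a top-level "images" array
// whose entries carry "repoTags".
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any listed image tag contains the wanted reference,
// mirroring the "couldn't find preloaded image" check in the log.
func hasImage(wanted string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if strings.Contains(tag, wanted) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err)
	// If false, the preload tarball is copied to /preloaded.tar.lz4 and extracted with
	// `tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4`,
	// as the surrounding log lines show.
}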
	I0130 22:14:17.817650  681007 ssh_runner.go:195] Run: crio config
	I0130 22:14:17.882693  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:17.882719  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:17.882745  681007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:17.882777  681007 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850803 NodeName:default-k8s-diff-port-850803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:14:17.882963  681007 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850803"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:17.883060  681007 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 22:14:17.883125  681007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:14:17.895645  681007 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:17.895725  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:17.906009  681007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0130 22:14:17.923445  681007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:17.941439  681007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0130 22:14:17.958729  681007 ssh_runner.go:195] Run: grep 192.168.50.254	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:17.962941  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:17.975030  681007 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803 for IP: 192.168.50.254
	I0130 22:14:17.975065  681007 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:17.975251  681007 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:17.975300  681007 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:17.975377  681007 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.key
	I0130 22:14:17.975436  681007 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key.c40bdd21
	I0130 22:14:17.975471  681007 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key
	I0130 22:14:17.975603  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:17.975634  681007 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:17.975642  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:17.975665  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:17.975689  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:17.975714  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:17.975751  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:17.976423  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:18.003363  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:18.029597  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:18.053558  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:14:18.077340  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:18.100959  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:18.124756  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:18.148266  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:18.171688  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:18.195020  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:18.221728  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:18.245353  681007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:18.262630  681007 ssh_runner.go:195] Run: openssl version
	I0130 22:14:18.268255  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:18.279361  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284264  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284318  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.290374  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:18.301414  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:18.312992  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317776  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317826  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.323596  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:18.334360  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:18.346052  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350871  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350917  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.358340  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:18.371640  681007 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:18.376906  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:18.383780  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:18.390468  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:18.396506  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:18.402525  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:18.407949  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
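Each `openssl x509 -noout -checkend 86400` run above asks whether the given certificate will still be valid 24 hours from now; a failing check would force certificate regeneration before kubeadm runs. The same test expressed with Go's crypto/x509 (the path is one of the files checked in the log; this is an illustrative equivalent, not minikube's implementation):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the equivalent of `openssl x509 -noout -checkend <seconds>` failing.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}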
	I0130 22:14:18.413375  681007 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:18.413454  681007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:18.413546  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:18.460309  681007 cri.go:89] found id: ""
	I0130 22:14:18.460393  681007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:18.474036  681007 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:18.474062  681007 kubeadm.go:636] restartCluster start
	I0130 22:14:18.474153  681007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:18.484682  681007 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:18.486004  681007 kubeconfig.go:92] found "default-k8s-diff-port-850803" server: "https://192.168.50.254:8444"
	I0130 22:14:18.488661  681007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:18.499334  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:18.499389  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:18.512812  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:15.878232  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.047391  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:17.215329  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.367292  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:18.386828  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:18.387291  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:18.387324  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:18.387230  682021 retry.go:31] will retry after 1.961289931s: waiting for machine to come up
	I0130 22:14:20.351407  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:20.351939  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:20.351975  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:20.351883  682021 retry.go:31] will retry after 2.41188295s: waiting for machine to come up
	I0130 22:14:18.999791  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.011386  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.025823  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.499386  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.499505  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.513098  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.000365  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.000469  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.017498  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.500160  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.500286  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.517695  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.000275  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.000409  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.017613  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.499881  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.499974  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.516790  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.000448  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.000562  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.014377  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.499900  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.500014  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.513212  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.999725  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.999875  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.013983  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:23.499549  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.499654  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.515308  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.554357  680786 pod_ready.go:92] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.685256  680786 pod_ready.go:81] duration metric: took 12.815676408s waiting for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.685298  680786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705805  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.705843  680786 pod_ready.go:81] duration metric: took 20.535204ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705859  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716827  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.716859  680786 pod_ready.go:81] duration metric: took 10.990465ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716873  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224601  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.224631  680786 pod_ready.go:81] duration metric: took 507.749018ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224648  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231481  680786 pod_ready.go:92] pod "kube-proxy-phh5j" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.231507  680786 pod_ready.go:81] duration metric: took 6.849925ms waiting for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231519  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237347  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.237372  680786 pod_ready.go:81] duration metric: took 5.84531ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237383  680786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.246204  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:24.248275  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:21.709185  680821 pod_ready.go:92] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:21.709226  680821 pod_ready.go:81] duration metric: took 6.507155774s waiting for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:21.709240  680821 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716371  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.716398  680821 pod_ready.go:81] duration metric: took 2.007151614s waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716407  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722781  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.722803  680821 pod_ready.go:81] duration metric: took 6.390258ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722814  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729034  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.729055  680821 pod_ready.go:81] duration metric: took 6.235103ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729063  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737325  680821 pod_ready.go:92] pod "kube-proxy-4c6nn" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.737348  680821 pod_ready.go:81] duration metric: took 8.279273ms waiting for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737361  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.742989  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.743013  680821 pod_ready.go:81] duration metric: took 5.643901ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.743024  680821 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.766642  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:22.767267  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:22.767359  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:22.767247  682021 retry.go:31] will retry after 2.473522194s: waiting for machine to come up
	I0130 22:14:25.242661  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:25.243221  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:25.243246  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:25.243168  682021 retry.go:31] will retry after 4.117858968s: waiting for machine to come up
	I0130 22:14:23.999813  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.999897  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.012879  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.499381  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.499457  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.513834  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.999458  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.999554  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.014779  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.499957  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.500093  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.513275  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.999800  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.999901  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.011952  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.499447  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.499530  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.511962  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.999473  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.999579  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.012316  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:27.499767  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:27.499862  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.511793  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.000036  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.000127  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.012698  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.499393  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.499495  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.511459  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.511494  681007 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
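
	The repeated "Checking apiserver status" entries above run pgrep roughly twice a second until the kube-apiserver process appears or a context deadline expires, at which point the cluster is marked for reconfiguration. A minimal local sketch of that loop (minikube runs the command over SSH; the interval and deadline below are assumptions):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPID polls pgrep until the kube-apiserver process shows up
	// or the context deadline expires, mirroring the loop in the log above.
	func waitForAPIServerPID(ctx context.Context) (string, error) {
		ticker := time.NewTicker(500 * time.Millisecond) // assumed interval
		defer ticker.Stop()
		for {
			out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			select {
			case <-ctx.Done():
				return "", fmt.Errorf("apiserver process never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // assumed deadline
		defer cancel()
		pid, err := waitForAPIServerPID(ctx)
		if err != nil {
			fmt.Println("needs reconfigure:", err)
			return
		}
		fmt.Println("kube-apiserver pid:", pid)
	}
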
	I0130 22:14:28.511507  681007 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:28.511522  681007 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:28.511593  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:28.550124  681007 cri.go:89] found id: ""
	I0130 22:14:28.550200  681007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:28.566091  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:28.575952  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:28.576019  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584539  681007 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584559  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:28.715666  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:26.744291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.744825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:25.752959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.250440  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:30.251820  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:29.365529  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366106  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has current primary IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366142  680506 main.go:141] libmachine: (old-k8s-version-912992) Found IP for machine: 192.168.39.84
	I0130 22:14:29.366157  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserving static IP address...
	I0130 22:14:29.366732  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.366763  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserved static IP address: 192.168.39.84
	I0130 22:14:29.366789  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | skip adding static IP to network mk-old-k8s-version-912992 - found existing host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"}
	I0130 22:14:29.366805  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting for SSH to be available...
	I0130 22:14:29.366820  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Getting to WaitForSSH function...
	I0130 22:14:29.369195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369625  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.369648  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369851  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH client type: external
	I0130 22:14:29.369899  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa (-rw-------)
	I0130 22:14:29.369956  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:29.369986  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | About to run SSH command:
	I0130 22:14:29.370002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | exit 0
	I0130 22:14:29.469381  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:29.469800  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetConfigRaw
	I0130 22:14:29.470597  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.473253  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.473721  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.473748  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.474114  680506 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/config.json ...
	I0130 22:14:29.474312  680506 machine.go:88] provisioning docker machine ...
	I0130 22:14:29.474333  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:29.474552  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474741  680506 buildroot.go:166] provisioning hostname "old-k8s-version-912992"
	I0130 22:14:29.474767  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474946  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.477297  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477636  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.477677  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477927  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.478188  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478383  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478541  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.478761  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.479265  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.479291  680506 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-912992 && echo "old-k8s-version-912992" | sudo tee /etc/hostname
	I0130 22:14:29.626924  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-912992
	
	I0130 22:14:29.626957  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.630607  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631062  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.631094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631278  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.631514  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631696  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631891  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.632111  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.632505  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.632524  680506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-912992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-912992/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-912992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:29.777390  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:29.777424  680506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:29.777450  680506 buildroot.go:174] setting up certificates
	I0130 22:14:29.777484  680506 provision.go:83] configureAuth start
	I0130 22:14:29.777504  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.777846  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.781195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781632  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.781682  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781860  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.784395  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784744  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.784776  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784895  680506 provision.go:138] copyHostCerts
	I0130 22:14:29.784960  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:29.784973  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:29.785039  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:29.785139  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:29.785148  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:29.785173  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:29.785231  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:29.785240  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:29.785263  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:29.785404  680506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-912992 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube old-k8s-version-912992]
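
	The provisioning step above issues a server certificate whose SANs cover the machine IP, localhost, minikube and the node name. A stripped-down sketch of generating such a certificate (self-signed here for brevity; minikube signs it with the cluster CA key instead):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-912992"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"localhost", "minikube", "old-k8s-version-912992"},
			IPAddresses:  []net.IP{net.ParseIP("192.168.39.84"), net.ParseIP("127.0.0.1")},
		}
		// Self-signed: template doubles as parent. Real provisioning passes the CA cert/key here.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
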
	I0130 22:14:30.047520  680506 provision.go:172] copyRemoteCerts
	I0130 22:14:30.047582  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:30.047607  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.050409  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050757  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.050790  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050992  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.051204  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.051345  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.051517  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.143197  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:30.164424  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 22:14:30.185497  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:30.207694  680506 provision.go:86] duration metric: configureAuth took 430.192351ms
	I0130 22:14:30.207731  680506 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:30.207938  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:14:30.208031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.210616  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.210984  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.211029  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.211184  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.211404  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211560  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211689  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.211838  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.212146  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.212161  680506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:30.548338  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:30.548369  680506 machine.go:91] provisioned docker machine in 1.074040133s
	I0130 22:14:30.548384  680506 start.go:300] post-start starting for "old-k8s-version-912992" (driver="kvm2")
	I0130 22:14:30.548397  680506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:30.548418  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.548802  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:30.548859  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.552482  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.552909  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.552945  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.553163  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.553368  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.553563  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.553702  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.649611  680506 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:30.654369  680506 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:30.654398  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:30.654527  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:30.654606  680506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:30.654692  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:30.664288  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:30.687603  680506 start.go:303] post-start completed in 139.202965ms
	I0130 22:14:30.687635  680506 fix.go:56] fixHost completed within 20.143642101s
	I0130 22:14:30.687663  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.690292  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690742  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.690780  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690973  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.691179  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691381  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691544  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.691751  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.692061  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.692072  680506 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:14:30.827201  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652870.759760061
	
	I0130 22:14:30.827227  680506 fix.go:206] guest clock: 1706652870.759760061
	I0130 22:14:30.827237  680506 fix.go:219] Guest: 2024-01-30 22:14:30.759760061 +0000 UTC Remote: 2024-01-30 22:14:30.687640253 +0000 UTC m=+368.205420110 (delta=72.119808ms)
	I0130 22:14:30.827264  680506 fix.go:190] guest clock delta is within tolerance: 72.119808ms
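
	The guest-clock check above parses the output of date +%s.%N on the guest and compares it with the host's view of the time, accepting small drift. A small sketch of that comparison (the one-second tolerance is an assumption for illustration, not minikube's actual value):

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far it
	// drifts from the supplied host reference time.
	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		const tolerance = time.Second // assumed tolerance
		delta, err := clockDelta("1706652870.759760061", time.Unix(1706652870, 687640253))
		if err != nil {
			panic(err)
		}
		if math.Abs(float64(delta)) <= float64(tolerance) {
			fmt.Printf("guest clock delta %v is within tolerance\n", delta)
		} else {
			fmt.Printf("guest clock delta %v exceeds tolerance\n", delta)
		}
	}
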
	I0130 22:14:30.827276  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 20.283317012s
	I0130 22:14:30.827301  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.827604  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:30.830260  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830761  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.830797  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830974  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831570  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831747  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831856  680506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:30.831925  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.832004  680506 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:30.832031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.834970  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835316  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835340  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835377  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835539  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.835794  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835798  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.835816  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835964  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.836028  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836202  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.836228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.836375  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836573  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.931876  680506 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:30.959543  680506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:31.114259  680506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:31.122360  680506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:31.122498  680506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:31.142608  680506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:31.142637  680506 start.go:475] detecting cgroup driver to use...
	I0130 22:14:31.142709  680506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:31.159940  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:31.177310  680506 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:31.177394  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:31.197811  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:31.215942  680506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:31.341800  680506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:31.476217  680506 docker.go:233] disabling docker service ...
	I0130 22:14:31.476303  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:31.493525  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:31.505631  680506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:31.630766  680506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:31.744997  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:31.760432  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:31.778076  680506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 22:14:31.778156  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.788945  680506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:31.789063  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.799691  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.811057  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.822879  680506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:31.835071  680506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:31.844391  680506 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:31.844478  680506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:31.858948  680506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:31.868566  680506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:31.972874  680506 ssh_runner.go:195] Run: sudo systemctl restart crio
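
	The sed commands above point cri-o at the registry.k8s.io/pause:3.1 pause image and the cgroupfs cgroup manager before the service is restarted. A local sketch of the same rewrite (minikube applies it remotely over SSH; this version edits the file directly and still requires a crio restart afterwards):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// configureCrio rewrites the pause image and cgroup manager in a cri-o drop-in
	// config, roughly what the sed commands in the log above do.
	func configureCrio(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		// Paths and values mirror the log; restart cri-o (sudo systemctl restart crio)
		// after the rewrite for the change to take effect.
		if err := configureCrio("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.1", "cgroupfs"); err != nil {
			fmt.Println("configure cri-o:", err)
		}
	}
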
	I0130 22:14:32.150449  680506 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:32.150536  680506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:32.155130  680506 start.go:543] Will wait 60s for crictl version
	I0130 22:14:32.155192  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:32.158927  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:32.199472  680506 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:32.199568  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.245662  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.308945  680506 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 22:14:32.310311  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:32.313118  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313548  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:32.313596  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313777  680506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:32.317774  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:32.333291  680506 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 22:14:32.333356  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:32.389401  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:32.389494  680506 ssh_runner.go:195] Run: which lz4
	I0130 22:14:32.394618  680506 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:14:32.399870  680506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:32.399907  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 22:14:29.354779  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.576966  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.649608  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.729908  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:29.730008  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.230637  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.730130  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.231149  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.730722  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.230159  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.258815  681007 api_server.go:72] duration metric: took 2.528908545s to wait for apiserver process to appear ...
	I0130 22:14:32.258850  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:32.258872  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
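
	Once the apiserver process exists, the log switches to polling https://192.168.50.254:8444/healthz, first getting 403 from the anonymous probe and then 500 while post-start hooks finish. A minimal sketch of one such probe plus retry loop (TLS verification is skipped and the timeouts are assumptions; minikube authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz performs one probe against the apiserver /healthz endpoint,
	// the same endpoint polled in the log above.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout:   5 * time.Second, // assumed per-request timeout
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			// 403 means the anonymous probe was rejected; 500 lists the
			// post-start hooks that have not finished yet, as in the log.
			return fmt.Errorf("healthz returned %d:\n%s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		// Poll a handful of times, roughly like the half-second retry loop above.
		for i := 0; i < 20; i++ {
			if err := checkHealthz("https://192.168.50.254:8444/healthz"); err != nil {
				fmt.Println("not ready yet:", err)
				time.Sleep(500 * time.Millisecond)
				continue
			}
			fmt.Println("apiserver is healthy")
			return
		}
		fmt.Println("gave up waiting for a healthy apiserver")
	}
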
	I0130 22:14:31.245860  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:33.256817  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:32.753558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.761674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.208834  680506 crio.go:444] Took 1.814253 seconds to copy over tarball
	I0130 22:14:34.208929  680506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:37.177389  680506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.968423546s)
	I0130 22:14:37.177436  680506 crio.go:451] Took 2.968549 seconds to extract the tarball
	I0130 22:14:37.177450  680506 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:14:37.233540  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:37.291641  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:37.291680  680506 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:14:37.291780  680506 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.291799  680506 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.291820  680506 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 22:14:37.291828  680506 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.291904  680506 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.291802  680506 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.292022  680506 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.291788  680506 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293663  680506 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.293740  680506 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293753  680506 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.293662  680506 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.293800  680506 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.293884  680506 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.492113  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.494903  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.495618  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 22:14:37.508190  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.512582  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.514112  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.259261  681007 api_server.go:269] stopped: https://192.168.50.254:8444/healthz: Get "https://192.168.50.254:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:37.259326  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:37.454899  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:37.454935  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:37.759230  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.420961  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.420997  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.421026  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.429934  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.429972  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.759948  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:35.746244  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.748221  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.252371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.752965  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:40.032924  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.032973  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.032996  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.076077  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.076109  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.259372  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.268746  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.268785  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.759307  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.764886  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:14:40.774834  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:40.774863  681007 api_server.go:131] duration metric: took 8.516004362s to wait for apiserver health ...
	I0130 22:14:40.774875  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:40.774883  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:40.776748  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
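The 681007 stream above is minikube's api_server.go polling https://192.168.50.254:8444/healthz until the apiserver's post-start hooks all report ok; each 500 response lists the hooks with [+]/[-] markers, and the loop ends once the endpoint returns 200. As a rough standalone illustration (not minikube's own code), a Go sketch of the same kind of verbose healthz polling is below; the endpoint URL is taken from the log, while the insecure TLS client and the timeouts are assumptions made only for the example.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz repeatedly GETs the verbose healthz endpoint until it returns
// 200 or the deadline passes, printing the hook-by-hook body on each failure.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// Assumption for the sketch: skip certificate verification, since a
		// local apiserver's serving cert is not in the system trust store.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	// ?verbose asks the apiserver for the per-hook detail seen in the log.
	if err := pollHealthz("https://192.168.50.254:8444/healthz?verbose", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}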
	I0130 22:14:37.573794  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.589122  680506 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 22:14:37.589177  680506 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.589222  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.653263  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.661867  680506 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 22:14:37.661918  680506 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.661974  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.681759  680506 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 22:14:37.681810  680506 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 22:14:37.681868  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811285  680506 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 22:14:37.811334  680506 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.811398  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811403  680506 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 22:14:37.811441  680506 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.811507  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811522  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.811592  680506 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 22:14:37.811646  680506 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.811684  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 22:14:37.811508  680506 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 22:14:37.811723  680506 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.811694  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811753  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811648  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.828948  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.887304  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 22:14:37.887396  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.924180  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.934685  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 22:14:37.934737  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.934948  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 22:14:37.951228  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 22:14:37.955310  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 22:14:37.988234  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 22:14:38.007649  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 22:14:38.007710  680506 cache_images.go:92] LoadImages completed in 716.017973ms
	W0130 22:14:38.007789  680506 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0130 22:14:38.007920  680506 ssh_runner.go:195] Run: crio config
	I0130 22:14:38.081077  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:38.081112  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:38.081141  680506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:38.081175  680506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-912992 NodeName:old-k8s-version-912992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 22:14:38.082099  680506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-912992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-912992
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.84:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:38.082244  680506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-912992 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:14:38.082342  680506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 22:14:38.091606  680506 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:38.091676  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:38.100424  680506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 22:14:38.117658  680506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:38.134721  680506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 22:14:38.151680  680506 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:38.155416  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:38.169111  680506 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992 for IP: 192.168.39.84
	I0130 22:14:38.169145  680506 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:38.169305  680506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:38.169342  680506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:38.169412  680506 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.key
	I0130 22:14:38.169506  680506 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key.2e1821a6
	I0130 22:14:38.169547  680506 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key
	I0130 22:14:38.169654  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:38.169689  680506 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:38.169702  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:38.169726  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:38.169753  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:38.169776  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:38.169818  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:38.170542  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:38.195046  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:38.217051  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:38.240099  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 22:14:38.266523  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:38.289237  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:38.313011  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:38.336140  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:38.359683  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:38.382658  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:38.407558  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:38.435231  680506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:38.453753  680506 ssh_runner.go:195] Run: openssl version
	I0130 22:14:38.459339  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:38.469159  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474001  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474079  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.479508  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:38.489049  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:38.498644  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503289  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503340  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.508873  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:38.518533  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:38.527871  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532447  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532493  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.538832  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:38.549398  680506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:38.553860  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:38.559537  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:38.565050  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:38.570705  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:38.576386  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:38.581918  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
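The openssl x509 -checkend 86400 runs above confirm that each control-plane certificate (apiserver, etcd, front-proxy clients and servers) is still valid for at least 24 hours before the old-k8s-version cluster is restarted. A small Go equivalent of that per-certificate check is sketched below; this is only an illustration of the same idea, not minikube's implementation, and the certificate path in main is a hypothetical example.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file at
// certPath expires within the given window, mirroring
// `openssl x509 -checkend 86400` (86400s = 24h).
func expiresWithin(certPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", certPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}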
	I0130 22:14:38.587630  680506 kubeadm.go:404] StartCluster: {Name:old-k8s-version-912992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:38.587746  680506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:38.587803  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:38.630328  680506 cri.go:89] found id: ""
	I0130 22:14:38.630420  680506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:38.642993  680506 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:38.643026  680506 kubeadm.go:636] restartCluster start
	I0130 22:14:38.643095  680506 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:38.653192  680506 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:38.654325  680506 kubeconfig.go:92] found "old-k8s-version-912992" server: "https://192.168.39.84:8443"
	I0130 22:14:38.656891  680506 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:38.666689  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:38.666762  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:38.678857  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.167457  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.167543  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.179779  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.667279  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.667371  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.679872  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.167509  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.167607  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.181001  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.666977  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.667063  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.679278  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.167767  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.167850  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.182139  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.667595  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.667687  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.681165  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:42.167790  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.167888  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.180444  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
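In the 680506 stream, restartCluster probes for a running kube-apiserver roughly every 500ms with `sudo pgrep -xnf kube-apiserver.*minikube.*`; each attempt exits with status 1 because no apiserver process exists yet, which produces the repeated "stopped: unable to get apiserver pid" warnings. A standalone sketch of that kind of process-wait loop follows; the pgrep pattern is taken from the log, while running the command locally (rather than over SSH as minikube does) is an assumption for the example.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForProcess polls `pgrep -xnf <pattern>` until it prints a PID or the
// timeout elapses. pgrep exits non-zero when nothing matches, which is the
// "unable to get apiserver pid" case in the log.
func waitForProcess(pattern string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("pgrep", "-xnf", pattern).Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("no process matching %q appeared within %s", pattern, timeout)
}

func main() {
	pid, err := waitForProcess("kube-apiserver.*minikube.*", time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}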
	I0130 22:14:40.777979  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:40.798593  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:40.826400  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:40.839821  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:40.839847  681007 system_pods.go:61] "coredns-5dd5756b68-t65nr" [1379e1d2-263a-4d35-a630-4e197767b62d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:40.839856  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [e8468358-fd44-4f0e-b54b-13e9a478e259] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:40.839868  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [2e35ea0f-78e5-41b4-965a-c428408f84eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:40.839877  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [669d8c85-812f-4bfc-b3bb-7f5041ca8514] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:40.839890  681007 system_pods.go:61] "kube-proxy-9v5rw" [e97b697b-472b-4b3d-886b-39786c1b3760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:40.839905  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [956ec644-071b-4390-b63e-8cbe9ad2a350] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:40.839918  681007 system_pods.go:61] "metrics-server-57f55c9bc5-wlzw4" [3d2bfab3-e9e2-484b-8b8d-779869cbcf9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:40.839927  681007 system_pods.go:61] "storage-provisioner" [e87ce7ad-4933-41b6-8e20-91a4e9ecc45c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:40.839934  681007 system_pods.go:74] duration metric: took 13.512695ms to wait for pod list to return data ...
	I0130 22:14:40.839942  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:40.843711  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:40.843736  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:40.843747  681007 node_conditions.go:105] duration metric: took 3.799992ms to run NodePressure ...
	I0130 22:14:40.843762  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:41.200590  681007 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205872  681007 kubeadm.go:787] kubelet initialised
	I0130 22:14:41.205892  681007 kubeadm.go:788] duration metric: took 5.278409ms waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205899  681007 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:41.214192  681007 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:43.221105  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.787175  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.243973  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.244009  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.250982  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.751725  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.667181  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.667264  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.679726  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.167750  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.167867  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.179954  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.667584  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.667715  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.680828  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.167107  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.167263  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.183107  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.667674  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.667749  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.680942  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.167589  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.167689  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.180786  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.667715  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.667811  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.681199  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.167671  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.167764  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.181276  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.666810  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.666952  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.680935  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:47.167612  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.167711  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.180385  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.221153  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.221375  681007 pod_ready.go:92] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:47.221398  681007 pod_ready.go:81] duration metric: took 6.00718187s waiting for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:47.221411  681007 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:46.244096  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:48.245476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:46.755543  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:49.252337  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.667527  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.667633  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.680519  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.167564  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.167659  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.179815  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.667656  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.667733  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.682679  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.682711  680506 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:48.682722  680506 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:48.682735  680506 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:48.682788  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:48.726311  680506 cri.go:89] found id: ""
	I0130 22:14:48.726399  680506 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:48.744504  680506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:48.755471  680506 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:48.755523  680506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765613  680506 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765636  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:48.886214  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:49.873929  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.090456  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.199471  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.278504  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:50.278604  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:50.779646  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.279488  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.779657  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.829813  680506 api_server.go:72] duration metric: took 1.551314483s to wait for apiserver process to appear ...
	I0130 22:14:51.829852  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:51.829888  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:51.830469  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": dial tcp 192.168.39.84:8443: connect: connection refused
	I0130 22:14:52.330162  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:49.228581  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.230115  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.228169  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.228193  681007 pod_ready.go:81] duration metric: took 6.006776273s waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.228201  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233723  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.233746  681007 pod_ready.go:81] duration metric: took 5.53858ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233754  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238962  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.238983  681007 pod_ready.go:81] duration metric: took 5.221325ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238994  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247623  681007 pod_ready.go:92] pod "kube-proxy-9v5rw" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.247646  681007 pod_ready.go:81] duration metric: took 8.643709ms waiting for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247657  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254079  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.254102  681007 pod_ready.go:81] duration metric: took 6.435694ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254113  681007 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
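pod_ready.go above waits for each system-critical pod to reach the Ready condition, logging "Ready":"False" every couple of seconds until it flips to "True"; the metrics-server pods in these runs never become Ready, which is what the parallel streams keep reporting. A minimal client-go sketch of such a readiness wait is below, assuming a standard kubeconfig; the kubeconfig path, namespace, and pod name passed in main are illustrative only.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod until its Ready condition is True or the timeout
// expires, similar in spirit to minikube's pod_ready wait.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	// Assumption: kubeconfig at a conventional path for the profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "metrics-server-57f55c9bc5-wlzw4", 4*time.Minute))
}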
	I0130 22:14:50.745213  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.245163  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.252956  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.750853  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.331302  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:57.331361  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:55.262286  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.762588  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:55.245641  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.246341  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:58.248157  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.248193  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.248223  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.329248  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.329276  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.330342  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.349249  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.349288  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:58.830998  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.836484  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.836510  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.330646  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.337516  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:59.337559  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.830016  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.836129  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:14:59.846684  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:14:59.846741  680506 api_server.go:131] duration metric: took 8.016878739s to wait for apiserver health ...
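The lines above show minikube repeatedly probing the apiserver's /healthz endpoint, tolerating the transient 403 responses (RBAC bootstrap roles not yet created) and 500 responses (post-start hooks still failing) until the endpoint finally returns 200. A minimal sketch of such a wait loop, assuming the endpoint taken from the log and a skip-verify TLS client; this is an illustration, not minikube's actual api_server.go implementation:

```go
// Sketch: poll an apiserver /healthz endpoint until it returns HTTP 200 or a
// deadline expires, retrying through the transient 403/500 responses seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// Assumption for this sketch: the apiserver serves a self-signed cert
		// during bring-up, so the probe skips certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: ok
			}
			// 403 (RBAC not bootstrapped yet) and 500 (post-start hooks still
			// failing) are expected while the control plane settles; retry.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.84:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```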
	I0130 22:14:59.846760  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:59.846770  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:59.848874  680506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:55.751242  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.755048  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:00.251809  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.850215  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:59.860069  680506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
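Having recommended the bridge CNI for the kvm2 + crio combination, minikube creates /etc/cni/net.d and copies a conflist into it. The actual 457-byte file is not reproduced in the log; the sketch below writes a generic bridge-plugin conflist of the same general shape, with an assumed pod subnet, purely for illustration:

```go
// Sketch only: write a minimal bridge CNI conflist to /etc/cni/net.d, mirroring
// the "mkdir -p" + scp step in the log. Subnet and field values are assumptions,
// not the contents of minikube's actual 1-k8s.conflist.
package main

import "os"

const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		panic(err)
	}
}
```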
	I0130 22:14:59.880017  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:59.891300  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:14:59.891330  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:14:59.891335  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:14:59.891340  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:14:59.891345  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Pending
	I0130 22:14:59.891349  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:14:59.891352  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:14:59.891360  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:14:59.891368  680506 system_pods.go:74] duration metric: took 11.331282ms to wait for pod list to return data ...
	I0130 22:14:59.891377  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:59.895522  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:59.895558  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:59.895571  680506 node_conditions.go:105] duration metric: took 4.184167ms to run NodePressure ...
	I0130 22:14:59.895591  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:15:00.214560  680506 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218844  680506 kubeadm.go:787] kubelet initialised
	I0130 22:15:00.218863  680506 kubeadm.go:788] duration metric: took 4.278574ms waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218870  680506 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:00.223310  680506 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.228349  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228371  680506 pod_ready.go:81] duration metric: took 5.033709ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.228380  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228385  680506 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.236353  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236378  680506 pod_ready.go:81] duration metric: took 7.981988ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.236387  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236394  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.244477  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244504  680506 pod_ready.go:81] duration metric: took 8.099653ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.244521  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244531  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.283561  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283590  680506 pod_ready.go:81] duration metric: took 39.047028ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.283602  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283610  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.683495  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683524  680506 pod_ready.go:81] duration metric: took 399.906973ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.683537  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683544  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:01.084061  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084093  680506 pod_ready.go:81] duration metric: took 400.538074ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:01.084107  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084117  680506 pod_ready.go:38] duration metric: took 865.238684ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
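Each of the recurring pod_ready.go lines boils down to one check: a pod counts as "Ready" once its PodReady condition reports True, and the wait is skipped while the hosting node itself is not Ready. A sketch of that check with client-go, using a placeholder kubeconfig path and one of the metrics-server pod names from this run; this illustrates the condition being polled, not minikube's pod_ready.go code:

```go
// Sketch: poll a pod until its PodReady condition is True, roughly what the
// repeated pod_ready.go "Ready":"False" lines above are waiting for.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder path; the real run uses the jenkins workspace kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-wlzw4", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second) // re-check, as the log does every few seconds
	}
}
```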
	I0130 22:15:01.084149  680506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:15:01.120344  680506 ops.go:34] apiserver oom_adj: -16
	I0130 22:15:01.120372  680506 kubeadm.go:640] restartCluster took 22.477337631s
	I0130 22:15:01.120384  680506 kubeadm.go:406] StartCluster complete in 22.532762257s
	I0130 22:15:01.120408  680506 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.120536  680506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:15:01.123018  680506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.123321  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:15:01.123514  680506 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:15:01.123624  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:15:01.123662  680506 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123683  680506 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123701  680506 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-912992"
	W0130 22:15:01.123709  680506 addons.go:243] addon metrics-server should already be in state true
	I0130 22:15:01.123745  680506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-912992"
	I0130 22:15:01.123769  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124153  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124178  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.124189  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124218  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.123635  680506 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-912992"
	I0130 22:15:01.124295  680506 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-912992"
	W0130 22:15:01.124303  680506 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:15:01.124357  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124693  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124741  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.141006  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0130 22:15:01.141022  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0130 22:15:01.141594  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.141697  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.142122  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142142  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142273  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142297  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142793  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.142837  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0130 22:15:01.142797  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.143291  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.143380  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.143411  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.143758  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.143786  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.144174  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.144210  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.144212  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.144438  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.148328  680506 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-912992"
	W0130 22:15:01.148350  680506 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:15:01.148378  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.148706  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.148734  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.163324  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0130 22:15:01.163720  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0130 22:15:01.164054  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164187  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164638  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164665  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.164806  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164817  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.165086  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165242  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165310  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.165844  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.167686  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.170253  680506 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:15:01.168142  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.169379  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0130 22:15:01.172172  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:15:01.172200  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:15:01.172228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.174608  680506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:15:01.173335  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.175891  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.176824  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.177101  680506 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.177110  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.177116  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:15:01.177134  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.177137  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.177239  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.177855  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.178037  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.181184  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181626  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.181644  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181879  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.182032  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.182215  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.182321  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.182343  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.182745  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.182805  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.183262  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.183296  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.218510  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0130 22:15:01.218955  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.219566  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.219598  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.219976  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.220136  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.221882  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.222143  680506 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.222161  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:15:01.222178  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.225129  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225437  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.225454  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225732  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.225875  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.225948  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.226015  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.362950  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.405756  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:15:01.405829  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:15:01.442804  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.468468  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:15:01.468501  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:15:01.514493  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.514530  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:15:01.531543  680506 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 22:15:01.551886  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.697743  680506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-912992" context rescaled to 1 replicas
	I0130 22:15:01.697805  680506 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:15:01.699954  680506 out.go:177] * Verifying Kubernetes components...
	I0130 22:15:01.701746  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078654  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078682  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078704  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078736  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078751  680506 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:02.079190  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079200  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079221  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079229  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079231  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079235  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079245  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079246  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079200  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079257  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079266  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079665  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079685  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079695  680506 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-912992"
	I0130 22:15:02.079699  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079719  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.081702  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081725  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.081736  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.081746  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.081969  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081999  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.087366  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.087387  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.087642  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.087661  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.089698  680506 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 22:15:02.091156  680506 addons.go:505] enable addons completed in 967.651598ms: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 22:14:59.767179  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.262656  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.743796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:01.745268  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.245639  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.754252  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:05.250850  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.082265  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:06.582230  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:04.764379  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.764868  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.765839  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.744476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.744978  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.584004  680506 node_ready.go:49] node "old-k8s-version-912992" has status "Ready":"True"
	I0130 22:15:08.584038  680506 node_ready.go:38] duration metric: took 6.50526711s waiting for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:08.584052  680506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:08.591084  680506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595709  680506 pod_ready.go:92] pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.595735  680506 pod_ready.go:81] duration metric: took 4.623355ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595747  680506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600152  680506 pod_ready.go:92] pod "etcd-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.600175  680506 pod_ready.go:81] duration metric: took 4.419847ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600186  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604426  680506 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.604444  680506 pod_ready.go:81] duration metric: took 4.249901ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604454  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608671  680506 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.608685  680506 pod_ready.go:81] duration metric: took 4.224838ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608694  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984275  680506 pod_ready.go:92] pod "kube-proxy-qm7xx" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.984306  680506 pod_ready.go:81] duration metric: took 375.604271ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984321  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384278  680506 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:09.384303  680506 pod_ready.go:81] duration metric: took 399.974439ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384316  680506 pod_ready.go:38] duration metric: took 800.249209ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:09.384331  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:15:09.384383  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:15:09.399639  680506 api_server.go:72] duration metric: took 7.701783762s to wait for apiserver process to appear ...
	I0130 22:15:09.399665  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:15:09.399683  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:15:09.406824  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:15:09.407829  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:15:09.407850  680506 api_server.go:131] duration metric: took 8.177146ms to wait for apiserver health ...
	I0130 22:15:09.407860  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:15:09.584994  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:15:09.585031  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.585039  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.585046  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.585053  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.585059  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.585065  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.585072  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.585080  680506 system_pods.go:74] duration metric: took 177.213093ms to wait for pod list to return data ...
	I0130 22:15:09.585092  680506 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:15:09.784286  680506 default_sa.go:45] found service account: "default"
	I0130 22:15:09.784313  680506 default_sa.go:55] duration metric: took 199.211541ms for default service account to be created ...
	I0130 22:15:09.784322  680506 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:15:09.987063  680506 system_pods.go:86] 7 kube-system pods found
	I0130 22:15:09.987094  680506 system_pods.go:89] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.987103  680506 system_pods.go:89] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.987109  680506 system_pods.go:89] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.987114  680506 system_pods.go:89] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.987120  680506 system_pods.go:89] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.987125  680506 system_pods.go:89] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.987131  680506 system_pods.go:89] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.987140  680506 system_pods.go:126] duration metric: took 202.811673ms to wait for k8s-apps to be running ...
	I0130 22:15:09.987150  680506 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:15:09.987206  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:10.001966  680506 system_svc.go:56] duration metric: took 14.805505ms WaitForService to wait for kubelet.
	I0130 22:15:10.001997  680506 kubeadm.go:581] duration metric: took 8.30415043s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:15:10.002022  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:15:10.184699  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:15:10.184743  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:15:10.184756  680506 node_conditions.go:105] duration metric: took 182.728475ms to run NodePressure ...
	I0130 22:15:10.184772  680506 start.go:228] waiting for startup goroutines ...
	I0130 22:15:10.184782  680506 start.go:233] waiting for cluster config update ...
	I0130 22:15:10.184796  680506 start.go:242] writing updated cluster config ...
	I0130 22:15:10.185114  680506 ssh_runner.go:195] Run: rm -f paused
	I0130 22:15:10.239744  680506 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 22:15:10.241916  680506 out.go:177] 
	W0130 22:15:10.243307  680506 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 22:15:10.244540  680506 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 22:15:10.245844  680506 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-912992" cluster and "default" namespace by default
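The version-skew warning just above is simple arithmetic on the minor version numbers: kubectl 1.29.1 against a 1.16.0 cluster gives a skew of 13 minor versions, far outside the one-minor-version skew kubectl officially supports, hence the suggestion to use the bundled `minikube kubectl`. A small sketch of that computation:

```go
// Sketch of the minor-version-skew arithmetic behind the warning above.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor version number from a "major.minor.patch" string.
func minor(v string) int {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	m, _ := strconv.Atoi(parts[1]) // error ignored in this sketch
	return m
}

func main() {
	client, cluster := "1.29.1", "1.16.0"
	skew := minor(client) - minor(cluster)
	fmt.Printf("kubectl: %s, cluster: %s (minor skew: %d)\n", client, cluster, skew)
}
```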
	I0130 22:15:07.753442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.250385  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.770107  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.262302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:11.244598  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.744540  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:12.252794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:14.750293  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:15.761573  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:17.764138  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.245719  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.744763  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.751093  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.751144  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:19.766344  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:22.262506  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.243857  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.244633  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.250405  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.752715  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:24.762412  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.260985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:25.744105  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.746611  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:26.250066  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:28.250115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.251911  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:29.262020  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:31.763782  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.243836  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.244064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.244535  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.754073  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:35.249927  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.260099  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.262332  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.262515  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.245173  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.747970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:37.252466  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:39.254833  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:40.264075  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:42.763978  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.244902  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.246545  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.750938  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.751361  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.262599  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.769508  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.743965  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.745769  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:46.250381  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:48.250841  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.262796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.763728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:49.746064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:51.750634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.244634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.750564  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.751105  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.751544  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:55.261060  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:57.262293  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.245111  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:58.246787  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.751681  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.250409  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.762572  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.765901  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:00.744216  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:02.744765  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.750473  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.252199  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.267246  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.764985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:05.252271  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:07.745483  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.252327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:08.750460  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:09.263071  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.764448  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:10.244124  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:12.245643  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.248183  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.254631  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:13.752086  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.262534  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.763532  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.744988  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.746562  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.251554  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.751130  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:19.261302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.262097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.764162  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.243403  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.245825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:20.751443  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.251248  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:26.261011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.263281  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.744554  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:27.744970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.750244  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.249555  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.250246  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.761252  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.762070  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:29.745453  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.243772  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.245396  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.251218  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.752524  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:35.261942  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.264695  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:36.745702  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.244617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.250645  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.251192  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.762454  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.765643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.244956  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.245892  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.750084  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.751479  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:44.262004  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.262160  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.763669  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:45.744222  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:47.745591  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.249746  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.250654  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.252500  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:51.261603  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:53.261672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.244099  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.744215  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.749766  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.750634  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:55.261803  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:57.262915  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.744549  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.745030  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.244809  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.751851  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.258417  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.268254  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.761347  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.761999  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.246996  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.744672  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.750976  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.751083  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:05.763147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.264472  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.244449  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.244796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.250266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.250718  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.761567  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.762159  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.245064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.744572  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.750221  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.750688  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.752051  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:15.261414  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.262083  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.745621  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.243837  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.244825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.250798  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.251873  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.262614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.761873  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.762158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.245432  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.745684  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.750760  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:24.252401  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:25.762960  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.261732  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.246290  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.744375  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.749794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.750363  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:30.262011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:32.762896  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.243646  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.245351  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.251364  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.750995  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.262828  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.763644  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.245530  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.246211  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.752489  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.251704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.261365  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.261786  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:39.745084  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:41.746617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.244143  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.750921  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:45.251115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.262664  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.764196  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.769165  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.744967  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.745930  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:47.751743  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:50.250561  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.261754  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.764405  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.244859  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.744487  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:52.254402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:54.751442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:56.260885  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.261304  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:55.747588  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.244383  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:57.250767  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:59.750343  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.262535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.762755  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.248648  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.744883  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:01.751253  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:03.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:04.763841  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.263079  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:05.244262  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.244758  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.245079  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:06.252399  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:08.750732  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.263723  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.766305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.771997  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.744688  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:14.243700  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:10.751691  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.254909  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.263146  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.764654  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.244291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.250725  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:15.751459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:17.752591  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.251354  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:21.263171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.762025  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.238489  680786 pod_ready.go:81] duration metric: took 4m0.001085938s waiting for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:20.238561  680786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:20.238585  680786 pod_ready.go:38] duration metric: took 4m13.374837351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:20.238635  680786 kubeadm.go:640] restartCluster took 4m32.952408079s
	W0130 22:18:20.238771  680786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:20.238897  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:22.752701  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.743814  680821 pod_ready.go:81] duration metric: took 4m0.000772856s waiting for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:23.743843  680821 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:23.743867  680821 pod_ready.go:38] duration metric: took 4m8.55197109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:23.743901  680821 kubeadm.go:640] restartCluster took 4m27.679173945s
	W0130 22:18:23.743979  680821 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:23.744016  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:25.762818  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:27.766206  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:30.262706  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:32.263895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:33.696118  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.457184259s)
	I0130 22:18:33.696246  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:33.709756  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:33.719095  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:33.727249  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:33.727304  680786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:33.783803  680786 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0130 22:18:33.783934  680786 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:33.947330  680786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:33.947473  680786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:33.947594  680786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:34.185129  680786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:34.186847  680786 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:34.186958  680786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:34.187047  680786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:34.187130  680786 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:34.187254  680786 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:34.187590  680786 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:34.188233  680786 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:34.188591  680786 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:34.189435  680786 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:34.189737  680786 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:34.190284  680786 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:34.190677  680786 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:34.190788  680786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:34.357057  680786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:34.468135  680786 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0130 22:18:34.785137  680786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:34.900902  680786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:34.973785  680786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:34.974693  680786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:34.977481  680786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:37.518038  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.773993992s)
	I0130 22:18:37.518130  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:37.533148  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:37.542965  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:37.552859  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:37.552915  680821 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:37.614837  680821 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:18:37.614964  680821 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:37.783252  680821 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:37.783431  680821 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:37.783598  680821 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:38.009789  680821 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:38.011805  680821 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:38.011921  680821 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:38.012010  680821 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:38.012140  680821 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:38.012573  680821 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:38.013135  680821 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:38.014103  680821 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:38.015459  680821 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:38.016522  680821 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:38.017879  680821 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:38.018669  680821 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:38.019318  680821 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:38.019416  680821 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:38.190496  680821 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:38.487122  680821 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:38.567485  680821 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:38.764572  680821 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:38.765081  680821 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:38.771540  680821 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:34.761686  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:36.763512  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:38.772838  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:34.979275  680786 out.go:204]   - Booting up control plane ...
	I0130 22:18:34.979394  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:34.979502  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:34.979687  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:35.000161  680786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:35.001100  680786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:35.001180  680786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:35.143762  680786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:38.773177  680821 out.go:204]   - Booting up control plane ...
	I0130 22:18:38.773326  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:38.773447  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:38.774160  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:38.793263  680821 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:38.793414  680821 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:38.793489  680821 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:38.942605  680821 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:41.263027  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.264305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.147099  680786 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003222 seconds
	I0130 22:18:43.165914  680786 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:43.183810  680786 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:43.729066  680786 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:43.729309  680786 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-023824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:44.247224  680786 kubeadm.go:322] [bootstrap-token] Using token: 8v59zo.bsn08ubvfg01lew3
	I0130 22:18:44.248930  680786 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:44.249075  680786 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:44.256127  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:44.265628  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:44.269906  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:44.278100  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:44.283097  680786 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:44.301902  680786 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:44.542713  680786 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:44.665337  680786 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:44.665367  680786 kubeadm.go:322] 
	I0130 22:18:44.665448  680786 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:44.665463  680786 kubeadm.go:322] 
	I0130 22:18:44.665573  680786 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:44.665583  680786 kubeadm.go:322] 
	I0130 22:18:44.665660  680786 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:44.665761  680786 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:44.665830  680786 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:44.665840  680786 kubeadm.go:322] 
	I0130 22:18:44.665909  680786 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:44.665927  680786 kubeadm.go:322] 
	I0130 22:18:44.665994  680786 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:44.666003  680786 kubeadm.go:322] 
	I0130 22:18:44.666084  680786 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:44.666220  680786 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:44.666324  680786 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:44.666349  680786 kubeadm.go:322] 
	I0130 22:18:44.666456  680786 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:44.666544  680786 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:44.666551  680786 kubeadm.go:322] 
	I0130 22:18:44.666646  680786 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.666764  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:44.666789  680786 kubeadm.go:322] 	--control-plane 
	I0130 22:18:44.666795  680786 kubeadm.go:322] 
	I0130 22:18:44.666898  680786 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:44.666906  680786 kubeadm.go:322] 
	I0130 22:18:44.667000  680786 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.667121  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:44.667741  680786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:44.667773  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:18:44.667784  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:44.669613  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:47.444081  680821 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502071 seconds
	I0130 22:18:47.444241  680821 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:47.470140  680821 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:48.014141  680821 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:48.014385  680821 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-713938 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:48.528168  680821 kubeadm.go:322] [bootstrap-token] Using token: 5j3t7l.lolt26xy60ozf3ca
	I0130 22:18:45.765205  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.261716  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.529669  680821 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:48.529807  680821 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:48.544442  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:48.552536  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:48.555846  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:48.559711  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:48.563810  680821 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:48.580095  680821 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:48.820236  680821 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:48.950911  680821 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:48.951833  680821 kubeadm.go:322] 
	I0130 22:18:48.951927  680821 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:48.951958  680821 kubeadm.go:322] 
	I0130 22:18:48.952042  680821 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:48.952063  680821 kubeadm.go:322] 
	I0130 22:18:48.952089  680821 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:48.952144  680821 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:48.952190  680821 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:48.952196  680821 kubeadm.go:322] 
	I0130 22:18:48.952267  680821 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:48.952287  680821 kubeadm.go:322] 
	I0130 22:18:48.952346  680821 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:48.952356  680821 kubeadm.go:322] 
	I0130 22:18:48.952439  680821 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:48.952554  680821 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:48.952661  680821 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:48.952671  680821 kubeadm.go:322] 
	I0130 22:18:48.952805  680821 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:48.952894  680821 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:48.952906  680821 kubeadm.go:322] 
	I0130 22:18:48.953001  680821 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953139  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:48.953177  680821 kubeadm.go:322] 	--control-plane 
	I0130 22:18:48.953189  680821 kubeadm.go:322] 
	I0130 22:18:48.953296  680821 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:48.953306  680821 kubeadm.go:322] 
	I0130 22:18:48.953413  680821 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953555  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:48.954606  680821 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:48.954659  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:18:48.954677  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:48.956379  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:44.671035  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:44.696043  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:44.785738  680786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:44.785867  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.785894  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=no-preload-023824 minikube.k8s.io/updated_at=2024_01_30T22_18_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.887327  680786 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:45.135926  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:45.636755  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.136406  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.636077  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.136080  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.636924  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.136830  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.636945  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.136038  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.957922  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:48.974487  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:49.035551  680821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=embed-certs-713938 minikube.k8s.io/updated_at=2024_01_30T22_18_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.085285  680821 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:49.366490  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.866648  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.366789  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.761888  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:52.765352  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:53.254549  681007 pod_ready.go:81] duration metric: took 4m0.000414494s waiting for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:53.254593  681007 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:53.254623  681007 pod_ready.go:38] duration metric: took 4m12.048715105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:53.254662  681007 kubeadm.go:640] restartCluster took 4m34.780590329s
	W0130 22:18:53.254758  681007 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:53.254793  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:49.635946  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.136681  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.636090  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.136427  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.636232  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.136032  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.636639  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.136839  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.636957  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.136140  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.866857  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.367211  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.867291  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.366659  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.867351  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.366925  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.867180  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.366846  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.866651  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.366588  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.636246  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.136047  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.636970  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.136258  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.636239  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.136269  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.262159  680786 kubeadm.go:1088] duration metric: took 12.476361074s to wait for elevateKubeSystemPrivileges.
	I0130 22:18:57.262235  680786 kubeadm.go:406] StartCluster complete in 5m10.025020914s
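The long runs of "kubectl get sa default" lines above are minikube polling, roughly every 500ms, for the default service account to exist before it creates the minikube-rbac clusterrolebinding seen earlier in the log. A minimal sketch of that retry loop follows; the helper name waitForDefaultSA is hypothetical, while the kubectl path and kubeconfig are copied from the log.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA keeps running `kubectl get sa default` with the in-VM
// kubeconfig until the command succeeds (the service account exists) or the
// timeout is exhausted. The 500ms cadence matches the log.
func waitForDefaultSA(kubectl string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if err := cmd.Run(); err == nil {
			return nil // default service account exists; RBAC can be granted
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account did not appear within %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", 2*time.Minute)
	fmt.Println(err)
}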
	I0130 22:18:57.262288  680786 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.262417  680786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:18:57.265204  680786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.265504  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:18:57.265655  680786 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:18:57.265746  680786 addons.go:69] Setting storage-provisioner=true in profile "no-preload-023824"
	I0130 22:18:57.265769  680786 addons.go:234] Setting addon storage-provisioner=true in "no-preload-023824"
	W0130 22:18:57.265784  680786 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:18:57.265774  680786 addons.go:69] Setting default-storageclass=true in profile "no-preload-023824"
	I0130 22:18:57.265812  680786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-023824"
	I0130 22:18:57.265838  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:18:57.265817  680786 addons.go:69] Setting metrics-server=true in profile "no-preload-023824"
	I0130 22:18:57.265880  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.265898  680786 addons.go:234] Setting addon metrics-server=true in "no-preload-023824"
	W0130 22:18:57.265925  680786 addons.go:243] addon metrics-server should already be in state true
	I0130 22:18:57.265973  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266315  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266349  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266376  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266416  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.286273  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0130 22:18:57.286366  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I0130 22:18:57.286463  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0130 22:18:57.287691  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287692  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287851  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.288302  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288323  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288428  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288439  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288511  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288524  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288850  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.288897  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289215  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289405  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289437  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289685  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289719  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289792  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.293877  680786 addons.go:234] Setting addon default-storageclass=true in "no-preload-023824"
	W0130 22:18:57.293899  680786 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:18:57.293928  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.294325  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.294356  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.310259  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0130 22:18:57.310765  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.311270  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.311289  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.311818  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.312317  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.313547  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0130 22:18:57.314105  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.314665  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.314686  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.314752  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.316570  680786 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:18:57.315368  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.317812  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:18:57.317835  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:18:57.317858  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.318173  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.318194  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.321603  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.321671  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0130 22:18:57.321961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.322001  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.322280  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.322296  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.322491  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	W0130 22:18:57.322819  680786 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-023824" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0130 22:18:57.322843  680786 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0130 22:18:57.322866  680786 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:18:57.324267  680786 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:57.323003  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.323084  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.325567  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.325663  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:57.325909  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.326903  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.327113  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.329169  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.331160  680786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:18:57.332481  680786 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.332500  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:18:57.332519  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.336038  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336525  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.336546  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336746  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.336901  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.337031  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.337256  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.338027  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0130 22:18:57.338387  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.339078  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.339097  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.339406  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.339628  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.341385  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.341687  680786 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.341705  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:18:57.341725  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.344745  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345159  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.345180  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345408  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.345613  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.349708  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.349906  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.525974  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.531582  680786 node_ready.go:35] waiting up to 6m0s for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.532157  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:18:57.546542  680786 node_ready.go:49] node "no-preload-023824" has status "Ready":"True"
	I0130 22:18:57.546575  680786 node_ready.go:38] duration metric: took 14.926402ms waiting for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.546592  680786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:57.573983  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:18:57.589817  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:18:57.589854  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:18:57.684894  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:18:57.684926  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:18:57.715247  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.726490  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:57.726521  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:18:57.824368  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:58.842258  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.316238822s)
	I0130 22:18:58.842310  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842327  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842341  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.310137299s)
	I0130 22:18:58.842386  680786 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0130 22:18:58.842447  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.127164198s)
	I0130 22:18:58.842474  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842486  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842830  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842870  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842893  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842898  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842900  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842921  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842924  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842931  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842937  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842948  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.843222  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843243  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.843456  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843469  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.885944  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.885978  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.886311  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.888268  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.888288  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228029  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.403587938s)
	I0130 22:18:59.228205  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228233  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.228672  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.228714  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.228738  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228749  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228762  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.229119  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.229182  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.229197  680786 addons.go:470] Verifying addon metrics-server=true in "no-preload-023824"
	I0130 22:18:59.229126  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.230815  680786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:18:59.232158  680786 addons.go:505] enable addons completed in 1.966513856s: enabled=[storage-provisioner default-storageclass metrics-server]
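Enabling an addon in the lines above amounts to scp'ing its manifests into /etc/kubernetes/addons/ and applying them with the pinned kubectl binary under the in-VM kubeconfig. The following is a minimal sketch of that apply step with a hypothetical applyAddonManifests helper; it mirrors the command shape in the log rather than minikube's real ssh_runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// applyAddonManifests builds the same command shape the log shows:
// sudo KUBECONFIG=<in-VM kubeconfig> <pinned kubectl> apply -f <manifest>...
// Paths and the kubectl version are copied from the log for illustration.
func applyAddonManifests(manifests []string) error {
	args := []string{"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubectl", "apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("sudo", args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	fmt.Println("Run: sudo", strings.Join(args, " "))
	return cmd.Run()
}

func main() {
	_ = applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
}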
	I0130 22:18:55.867390  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.367181  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.866689  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.366578  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.867406  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.366702  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.867537  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.366860  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.867263  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.366507  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.866976  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.994251  680821 kubeadm.go:1088] duration metric: took 11.958653294s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:00.994309  680821 kubeadm.go:406] StartCluster complete in 5m4.981146882s
	I0130 22:19:00.994337  680821 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.994437  680821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:00.997310  680821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.997649  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:00.997866  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:00.997819  680821 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:00.997932  680821 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-713938"
	I0130 22:19:00.997951  680821 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-713938"
	W0130 22:19:00.997962  680821 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:00.997978  680821 addons.go:69] Setting metrics-server=true in profile "embed-certs-713938"
	I0130 22:19:00.997979  680821 addons.go:69] Setting default-storageclass=true in profile "embed-certs-713938"
	I0130 22:19:00.997994  680821 addons.go:234] Setting addon metrics-server=true in "embed-certs-713938"
	W0130 22:19:00.998002  680821 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:00.998009  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998012  680821 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-713938"
	I0130 22:19:00.998035  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998425  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998450  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.018726  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0130 22:19:01.018744  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0130 22:19:01.018754  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0130 22:19:01.019224  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019255  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019329  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019860  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.019890  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020012  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020062  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.020311  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020379  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020530  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.020984  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.021001  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021030  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.021533  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021581  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.024902  680821 addons.go:234] Setting addon default-storageclass=true in "embed-certs-713938"
	W0130 22:19:01.024926  680821 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:01.024955  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:01.025333  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.025372  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.041760  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0130 22:19:01.043510  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0130 22:19:01.043937  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.043980  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.044434  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044454  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.044864  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044902  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.045102  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045331  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045686  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.045730  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.045952  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.049065  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0130 22:19:01.049076  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.051101  680821 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:01.049716  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.052918  680821 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.052937  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:01.052959  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.055109  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.055135  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.057586  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.057591  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057611  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.057625  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057656  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.057829  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.057831  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.057974  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.058123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.063470  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.065048  680821 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:01.066385  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:01.066404  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:01.066425  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.066427  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I0130 22:19:01.067271  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.067806  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.067834  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.068198  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.068403  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.069684  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070069  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.070133  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.070162  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070347  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.070369  680821 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.070381  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:01.070402  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.073308  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073914  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.073945  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073978  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074155  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074207  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.074325  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.074346  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074441  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074534  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.210631  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.237088  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.307032  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:01.307130  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:01.368366  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:01.368405  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:01.388184  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:01.443355  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.443414  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:01.558399  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.610498  680821 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-713938" context rescaled to 1 replicas
	I0130 22:19:01.610545  680821 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:01.612750  680821 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:59.584739  680786 pod_ready.go:102] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:01.089751  680786 pod_ready.go:92] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.089826  680786 pod_ready.go:81] duration metric: took 3.515759187s waiting for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.089853  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098560  680786 pod_ready.go:92] pod "coredns-76f75df574-znj8f" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.098645  680786 pod_ready.go:81] duration metric: took 8.774285ms waiting for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098671  680786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.106943  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.107036  680786 pod_ready.go:81] duration metric: took 8.345837ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.107062  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120384  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.120413  680786 pod_ready.go:81] duration metric: took 13.332445ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120427  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129739  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.129825  680786 pod_ready.go:81] duration metric: took 9.387442ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129850  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282077  680786 pod_ready.go:92] pod "kube-proxy-8rn6v" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.282110  680786 pod_ready.go:81] duration metric: took 1.152243055s waiting for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282123  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681191  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.681221  680786 pod_ready.go:81] duration metric: took 399.089453ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681232  680786 pod_ready.go:38] duration metric: took 5.134627161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:02.681249  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:19:02.681313  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:19:02.695239  680786 api_server.go:72] duration metric: took 5.372338357s to wait for apiserver process to appear ...
	I0130 22:19:02.695265  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:19:02.695291  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:19:02.700070  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:19:02.701235  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:19:02.701266  680786 api_server.go:131] duration metric: took 5.988974ms to wait for apiserver health ...
	I0130 22:19:02.701279  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:19:02.899520  680786 system_pods.go:59] 9 kube-system pods found
	I0130 22:19:02.899558  680786 system_pods.go:61] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:02.899565  680786 system_pods.go:61] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:02.899572  680786 system_pods.go:61] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:02.899579  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:02.899586  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:02.899592  680786 system_pods.go:61] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:02.899599  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:02.899610  680786 system_pods.go:61] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:02.899626  680786 system_pods.go:61] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:02.899637  680786 system_pods.go:74] duration metric: took 198.349705ms to wait for pod list to return data ...
	I0130 22:19:02.899649  680786 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:19:03.080624  680786 default_sa.go:45] found service account: "default"
	I0130 22:19:03.080668  680786 default_sa.go:55] duration metric: took 181.003649ms for default service account to be created ...
	I0130 22:19:03.080681  680786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:19:03.285004  680786 system_pods.go:86] 9 kube-system pods found
	I0130 22:19:03.285040  680786 system_pods.go:89] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:03.285048  680786 system_pods.go:89] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:03.285056  680786 system_pods.go:89] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:03.285063  680786 system_pods.go:89] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:03.285069  680786 system_pods.go:89] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:03.285073  680786 system_pods.go:89] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:03.285078  680786 system_pods.go:89] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:03.285089  680786 system_pods.go:89] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:03.285097  680786 system_pods.go:89] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:03.285107  680786 system_pods.go:126] duration metric: took 204.418927ms to wait for k8s-apps to be running ...
	I0130 22:19:03.285117  680786 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:19:03.285172  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.303077  680786 system_svc.go:56] duration metric: took 17.949308ms WaitForService to wait for kubelet.
	I0130 22:19:03.303108  680786 kubeadm.go:581] duration metric: took 5.980212644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:19:03.303133  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:19:03.481755  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:19:03.481794  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:19:03.481804  680786 node_conditions.go:105] duration metric: took 178.666283ms to run NodePressure ...
	I0130 22:19:03.481816  680786 start.go:228] waiting for startup goroutines ...
	I0130 22:19:03.481822  680786 start.go:233] waiting for cluster config update ...
	I0130 22:19:03.481860  680786 start.go:242] writing updated cluster config ...
	I0130 22:19:03.482145  680786 ssh_runner.go:195] Run: rm -f paused
	I0130 22:19:03.549733  680786 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 22:19:03.551653  680786 out.go:177] * Done! kubectl is now configured to use "no-preload-023824" cluster and "default" namespace by default
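The healthz probe a few lines above (GET https://192.168.61.232:8443/healthz returning 200 with body "ok") is what gates the "Done!" message for this profile. Below is a minimal, self-contained sketch of such a probe; TLS verification is skipped here only to keep the example short, not because the real minikube check does so.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one GET against the apiserver's /healthz endpoint
// and treats a 200 response as healthy, printing the same shape of output
// the log shows. The real check is configured with the cluster's credentials.
func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200:\n%s\n", url, body)
	return nil
}

func main() {
	// IP and port copied from the log for illustration only.
	if err := checkHealthz("https://192.168.61.232:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}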
	I0130 22:19:01.614025  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.810450  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.573311695s)
	I0130 22:19:03.810519  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810531  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810592  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599920536s)
	I0130 22:19:03.810625  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.422412443s)
	I0130 22:19:03.810639  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810653  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810640  680821 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 22:19:03.811010  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811010  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811035  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811034  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811038  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811045  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811055  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811056  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811065  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811074  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811299  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811317  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811626  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811677  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811686  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838002  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.838036  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.838339  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.838364  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838384  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842042  680821 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.227988129s)
	I0130 22:19:03.842085  680821 node_ready.go:35] waiting up to 6m0s for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.842321  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.283887868s)
	I0130 22:19:03.842355  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842369  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.842728  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842753  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.842761  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.842772  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842784  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.843015  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.843031  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.843042  680821 addons.go:470] Verifying addon metrics-server=true in "embed-certs-713938"
	I0130 22:19:03.844872  680821 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:03.846361  680821 addons.go:505] enable addons completed in 2.848549166s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:03.857259  680821 node_ready.go:49] node "embed-certs-713938" has status "Ready":"True"
	I0130 22:19:03.857281  680821 node_ready.go:38] duration metric: took 15.183316ms waiting for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.857290  680821 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:03.880136  680821 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392506  680821 pod_ready.go:92] pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.392542  680821 pod_ready.go:81] duration metric: took 1.512370879s waiting for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392556  680821 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402272  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.402382  680821 pod_ready.go:81] duration metric: took 9.816254ms waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402410  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414813  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.414844  680821 pod_ready.go:81] duration metric: took 12.42049ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414861  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424628  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.424651  680821 pod_ready.go:81] duration metric: took 9.782ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424660  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445652  680821 pod_ready.go:92] pod "kube-proxy-f7mgv" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.445679  680821 pod_ready.go:81] duration metric: took 21.012459ms waiting for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445692  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.459758  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.204942723s)
	I0130 22:19:07.459833  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:07.475749  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:19:07.487056  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:19:07.498268  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:19:07.498316  681007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:19:07.552393  681007 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:19:07.552482  681007 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:19:07.703415  681007 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:19:07.703558  681007 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:19:07.703688  681007 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:19:07.929127  681007 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:19:07.931129  681007 out.go:204]   - Generating certificates and keys ...
	I0130 22:19:07.931256  681007 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:19:07.931340  681007 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:19:07.931443  681007 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:19:07.931568  681007 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:19:07.931907  681007 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:19:07.933061  681007 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:19:07.934226  681007 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:19:07.935564  681007 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:19:07.936846  681007 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:19:07.938253  681007 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:19:07.939205  681007 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:19:07.939281  681007 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:19:08.017218  681007 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:19:08.179939  681007 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:19:08.390089  681007 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:19:08.500690  681007 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:19:08.501201  681007 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:19:08.506551  681007 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:19:08.508442  681007 out.go:204]   - Booting up control plane ...
	I0130 22:19:08.508554  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:19:08.508643  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:19:08.509176  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:19:08.528978  681007 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:19:08.529909  681007 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:19:08.530016  681007 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:19:08.657813  681007 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:19:05.846282  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.846316  680821 pod_ready.go:81] duration metric: took 400.615309ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.846329  680821 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.854210  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:10.354894  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:12.358737  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:14.361808  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:16.661056  681007 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003483 seconds
	I0130 22:19:16.663313  681007 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:19:16.682919  681007 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:19:17.218185  681007 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:19:17.218446  681007 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-850803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:19:17.733745  681007 kubeadm.go:322] [bootstrap-token] Using token: oi6eg1.osding0t7oyyeu0p
	I0130 22:19:17.735211  681007 out.go:204]   - Configuring RBAC rules ...
	I0130 22:19:17.735388  681007 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:19:17.744899  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:19:17.754341  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:19:17.758107  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:19:17.761508  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:19:17.765503  681007 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:19:17.781414  681007 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:19:18.095502  681007 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:19:18.190245  681007 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:19:18.190272  681007 kubeadm.go:322] 
	I0130 22:19:18.190348  681007 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:19:18.190360  681007 kubeadm.go:322] 
	I0130 22:19:18.190452  681007 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:19:18.190461  681007 kubeadm.go:322] 
	I0130 22:19:18.190493  681007 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:19:18.190604  681007 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:19:18.190702  681007 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:19:18.190716  681007 kubeadm.go:322] 
	I0130 22:19:18.190800  681007 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:19:18.190835  681007 kubeadm.go:322] 
	I0130 22:19:18.190892  681007 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:19:18.190906  681007 kubeadm.go:322] 
	I0130 22:19:18.190976  681007 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:19:18.191074  681007 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:19:18.191178  681007 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:19:18.191191  681007 kubeadm.go:322] 
	I0130 22:19:18.191293  681007 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:19:18.191416  681007 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:19:18.191438  681007 kubeadm.go:322] 
	I0130 22:19:18.191544  681007 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.191672  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:19:18.191703  681007 kubeadm.go:322] 	--control-plane 
	I0130 22:19:18.191714  681007 kubeadm.go:322] 
	I0130 22:19:18.191814  681007 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:19:18.191824  681007 kubeadm.go:322] 
	I0130 22:19:18.191936  681007 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.192085  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:19:18.192660  681007 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
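The kubeadm join commands printed above embed a --discovery-token-ca-cert-hash; if that output is lost, the hash can be recomputed on the control plane with the standard openssl recipe from the kubeadm documentation. A minimal sketch, assuming the certificate directory /var/lib/minikube/certs reported earlier in this run:

    # sha256 over the cluster CA public key, in the form kubeadm join expects
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex \
      | sed 's/^.* //'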
	I0130 22:19:18.192684  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:19:18.192692  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:19:18.194376  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:19:18.195608  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:19:18.244311  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
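The 457-byte conflist copied above is the bridge CNI configuration minikube generates when the kvm2 driver is paired with the crio runtime (the "recommending bridge" decision logged a few lines earlier). Its exact contents are not reproduced in the log; one way to read back what was actually written is over the profile's SSH session, for example:

    minikube -p default-k8s-diff-port-850803 ssh "sudo cat /etc/cni/net.d/1-k8s.conflist"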
	I0130 22:19:18.285107  681007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:19:18.285193  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.285210  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=default-k8s-diff-port-850803 minikube.k8s.io/updated_at=2024_01_30T22_19_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.682930  681007 ops.go:34] apiserver oom_adj: -16
	I0130 22:19:18.683119  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:16.854674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:18.854723  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:19.184109  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:19.683715  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.183529  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.684197  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.184124  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.684022  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.184033  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.683812  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.184203  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.683513  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.857387  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:23.354163  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:25.354683  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:24.184064  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:24.683177  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.183896  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.683522  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.183779  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.683891  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.183468  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.683878  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.183471  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.683793  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.853744  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:30.356959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:29.183658  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:29.683264  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.183311  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.683828  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.183841  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.287952  681007 kubeadm.go:1088] duration metric: took 13.002835585s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:31.287988  681007 kubeadm.go:406] StartCluster complete in 5m12.874624935s
	I0130 22:19:31.288014  681007 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.288132  681007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:31.290435  681007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.290772  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:31.290924  681007 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:31.291004  681007 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291027  681007 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291024  681007 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850803"
	W0130 22:19:31.291035  681007 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:31.291044  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:31.291048  681007 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291053  681007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850803"
	I0130 22:19:31.291078  681007 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291084  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	W0130 22:19:31.291089  681007 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:31.291142  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.291497  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291528  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291577  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291578  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.308624  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0130 22:19:31.308641  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0130 22:19:31.308628  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0130 22:19:31.309140  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309143  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309231  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309662  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309683  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309807  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309825  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309829  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309837  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.310304  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310324  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310621  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.310944  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.310983  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.311193  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.311237  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.314600  681007 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-850803"
	W0130 22:19:31.314619  681007 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:31.314641  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.314888  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.314923  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.331266  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0130 22:19:31.331358  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0130 22:19:31.332259  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332277  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332769  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332791  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.332930  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332949  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.333243  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333307  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333459  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.333534  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.335458  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.337520  681007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:31.335819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.338601  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0130 22:19:31.338925  681007 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.338944  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:31.338969  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.340850  681007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:31.339883  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.341794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.342314  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.342344  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:31.342364  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:31.342381  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.342456  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.342572  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.342787  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.342807  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.342806  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.343515  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.344047  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.344096  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.345163  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346044  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.346073  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346341  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.346515  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.346617  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.346703  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.360658  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0130 22:19:31.361009  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.361631  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.361653  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.362059  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.362284  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.363819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.364079  681007 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.364091  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:31.364104  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.367056  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367482  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.367508  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367705  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.367877  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.368024  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.368159  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.486668  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:31.512324  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.548212  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:31.548241  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:31.565423  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.607291  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:31.607318  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:31.647162  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.647192  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:31.723006  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.913300  681007 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850803" context rescaled to 1 replicas
	I0130 22:19:31.913355  681007 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:31.915323  681007 out.go:177] * Verifying Kubernetes components...
	I0130 22:19:31.916700  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:33.003770  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.517052198s)
	I0130 22:19:33.003803  681007 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0130 22:19:33.533121  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020753837s)
	I0130 22:19:33.533193  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533208  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533167  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967690921s)
	I0130 22:19:33.533306  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533322  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533714  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533727  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533728  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533738  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533747  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533745  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533759  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533769  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533802  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533973  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533987  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.535503  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.535515  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.535531  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.628879  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.628911  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.629222  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.629249  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.629251  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.742264  681007 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.825530161s)
	I0130 22:19:33.742301  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.019251933s)
	I0130 22:19:33.742328  681007 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.742355  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742371  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.742681  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.742701  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.742712  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742736  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.743035  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.743058  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.743072  681007 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:33.745046  681007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:33.746494  681007 addons.go:505] enable addons completed in 2.455579767s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:33.792738  681007 node_ready.go:49] node "default-k8s-diff-port-850803" has status "Ready":"True"
	I0130 22:19:33.792765  681007 node_ready.go:38] duration metric: took 50.422631ms waiting for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.792774  681007 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:33.814090  681007 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:32.853930  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.854970  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.821685  681007 pod_ready.go:92] pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.821713  681007 pod_ready.go:81] duration metric: took 1.007586687s waiting for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.821725  681007 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827824  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.827846  681007 pod_ready.go:81] duration metric: took 6.114329ms waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827855  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835557  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.835577  681007 pod_ready.go:81] duration metric: took 7.716283ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835586  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846707  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.846730  681007 pod_ready.go:81] duration metric: took 11.137144ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846742  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855583  681007 pod_ready.go:92] pod "kube-proxy-9b97q" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:35.855607  681007 pod_ready.go:81] duration metric: took 1.00885903s waiting for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855616  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146642  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:36.146669  681007 pod_ready.go:81] duration metric: took 291.044646ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146679  681007 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
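The metrics-server pod being waited on here was deployed with the addon image pointed at fake.domain/registry.k8s.io/echoserver:1.4 (logged during the addon setup above), an address that is not expected to be pullable, so the long run of pod_ready.go:102 "Ready":"False" lines that follows — for this pod and for the embed-certs profile's metrics-server pod interleaved with it — is the polling playing out rather than a new failure at this point. A quick way to confirm the cause from outside the test, using standard kubectl:

    kubectl --context default-k8s-diff-port-850803 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context default-k8s-diff-port-850803 -n kube-system describe pod -l k8s-app=metrics-server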
	I0130 22:19:38.154183  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:37.354609  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:39.854928  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:40.154641  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:42.159531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:41.855320  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.354523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.654954  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:47.154579  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:46.355021  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:48.853459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:49.653829  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:51.655608  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:50.853891  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:52.854695  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:55.354018  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:54.154453  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:56.155065  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:58.657247  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:57.853975  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:00.354902  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:01.153907  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:03.654237  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:02.854731  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:05.356880  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:06.155143  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:08.155296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:07.856132  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.356464  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.155799  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.654333  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.853942  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.354885  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.154056  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.154535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.853402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:20.353980  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:19.655422  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.154392  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.354117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.355044  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.155171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.655471  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.854532  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.354204  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.154677  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.654466  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.356403  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:33.356906  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:34.154078  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:36.654298  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:35.853262  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:37.857523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:40.354097  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:39.154049  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:41.654457  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:43.654895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:42.355195  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:44.854639  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:45.655775  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:48.155289  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:47.357754  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:49.855799  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:50.155498  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.655409  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.353449  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:54.354453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:55.155034  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:57.654844  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:56.354612  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:58.854992  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:59.655694  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.656577  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.353141  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:03.353830  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:04.154299  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:06.654312  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.654807  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:05.854650  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.353951  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.354031  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.655061  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.655432  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.354994  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:14.855265  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:15.159097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:17.653783  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:16.857702  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.359396  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.655858  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:22.156091  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:21.854394  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.354360  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.655296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:27.158080  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:26.855014  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.356117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.653580  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:32.154606  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:31.854704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.355484  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.654068  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.654158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.654269  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.357452  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.855223  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:40.655689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.154796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:41.354371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.854228  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:45.155130  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:47.155889  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:46.355266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:48.355485  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:50.362578  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:49.653701  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:51.655019  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:52.854642  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:55.353605  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:54.154411  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:56.654614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:58.660728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:57.854182  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:00.354287  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:01.155135  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:03.654733  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:02.853711  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:04.854845  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:05.656121  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:08.154541  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:07.353888  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:09.354542  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:10.653671  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:12.657917  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:11.854575  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:14.354327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:15.157012  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:17.158822  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:16.354558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:18.355214  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:19.655591  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.154262  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:20.855145  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.855595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:25.354646  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:24.654590  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:26.655050  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:27.357453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.854619  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.154225  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.156000  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:33.654263  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.855106  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:34.354611  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:35.654550  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:37.654631  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:36.856135  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.354424  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.655008  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.657897  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.659483  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.354687  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.354978  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:46.154172  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:48.154643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:45.853374  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:47.854345  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.353899  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.655054  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.655335  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.354795  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.853217  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.655525  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:57.153994  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:56.856987  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.353446  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.157129  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.655835  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.657302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.355499  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.356368  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:06.154373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:08.654373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854404  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854432  680821 pod_ready.go:81] duration metric: took 4m0.008096056s waiting for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:05.854442  680821 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:05.854449  680821 pod_ready.go:38] duration metric: took 4m1.997150293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:23:05.854467  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:05.854502  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:05.854561  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:05.929032  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:05.929061  680821 cri.go:89] found id: ""
	I0130 22:23:05.929073  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:05.929137  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.934693  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:05.934777  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:05.982312  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:05.982342  680821 cri.go:89] found id: ""
	I0130 22:23:05.982352  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:05.982417  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.986932  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:05.986988  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:06.031983  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.032007  680821 cri.go:89] found id: ""
	I0130 22:23:06.032015  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:06.032073  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.036373  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:06.036429  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:06.084796  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.084829  680821 cri.go:89] found id: ""
	I0130 22:23:06.084840  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:06.084908  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.089120  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:06.089185  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:06.139977  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.139998  680821 cri.go:89] found id: ""
	I0130 22:23:06.140006  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:06.140063  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.144088  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:06.144147  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:06.185075  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.185103  680821 cri.go:89] found id: ""
	I0130 22:23:06.185113  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:06.185164  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.189014  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:06.189070  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:06.223430  680821 cri.go:89] found id: ""
	I0130 22:23:06.223459  680821 logs.go:284] 0 containers: []
	W0130 22:23:06.223469  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:06.223477  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:06.223529  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:06.260048  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.260071  680821 cri.go:89] found id: ""
	I0130 22:23:06.260083  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:06.260141  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.263987  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:06.264013  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:06.315899  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:06.315930  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:06.366903  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:06.366935  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.406395  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:06.406429  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.445937  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:06.445967  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:06.507335  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:06.507368  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.559276  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:06.559313  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.618349  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:06.618390  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.660376  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:06.660410  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:07.080461  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:07.080504  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:07.153607  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.153767  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.176441  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:07.176475  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:07.191016  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:07.191045  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:07.338888  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.338919  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:07.339094  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:07.339109  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.339121  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.339129  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.339142  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:10.656229  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:13.154689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:15.156258  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.654584  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.340518  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:17.358757  680821 api_server.go:72] duration metric: took 4m15.748181205s to wait for apiserver process to appear ...
	I0130 22:23:17.358785  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:17.358824  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:17.358882  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:17.402796  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:17.402819  680821 cri.go:89] found id: ""
	I0130 22:23:17.402827  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:17.402878  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.408452  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:17.408525  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:17.454148  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.454174  680821 cri.go:89] found id: ""
	I0130 22:23:17.454185  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:17.454260  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.458375  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:17.458450  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:17.508924  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:17.508953  680821 cri.go:89] found id: ""
	I0130 22:23:17.508960  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:17.509011  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.512833  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:17.512900  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:17.556821  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:17.556849  680821 cri.go:89] found id: ""
	I0130 22:23:17.556857  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:17.556913  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.561605  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:17.561666  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:17.604962  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.604991  680821 cri.go:89] found id: ""
	I0130 22:23:17.605001  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:17.605078  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.611321  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:17.611395  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:17.651827  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:17.651860  680821 cri.go:89] found id: ""
	I0130 22:23:17.651869  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:17.651918  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.656414  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:17.656472  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:17.696085  680821 cri.go:89] found id: ""
	I0130 22:23:17.696120  680821 logs.go:284] 0 containers: []
	W0130 22:23:17.696130  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:17.696139  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:17.696197  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:17.742145  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.742171  680821 cri.go:89] found id: ""
	I0130 22:23:17.742183  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:17.742248  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.746837  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:17.746861  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:17.864654  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:17.864691  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.917753  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:17.917785  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.958876  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:17.958914  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.997774  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:17.997811  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:18.047778  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:18.047823  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:18.111572  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:18.111621  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:18.489601  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:18.489683  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:18.549905  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:18.549953  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:18.631865  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.632060  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.656777  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:18.656813  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:18.670944  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:18.670973  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:18.726388  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:18.726424  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:18.766317  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766350  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:18.766427  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:18.766446  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.766460  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.766473  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766485  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:20.155531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:22.654846  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:25.153520  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:27.158571  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:28.767516  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:23:28.774562  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:23:28.775796  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:28.775824  680821 api_server.go:131] duration metric: took 11.417031075s to wait for apiserver health ...
	I0130 22:23:28.775834  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:28.775860  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:28.775909  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:28.821439  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:28.821462  680821 cri.go:89] found id: ""
	I0130 22:23:28.821490  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:28.821556  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.826438  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:28.826495  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:28.870075  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:28.870104  680821 cri.go:89] found id: ""
	I0130 22:23:28.870113  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:28.870169  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.874672  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:28.874741  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:28.917733  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:28.917761  680821 cri.go:89] found id: ""
	I0130 22:23:28.917775  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:28.917835  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.925522  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:28.925586  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:28.979761  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:28.979793  680821 cri.go:89] found id: ""
	I0130 22:23:28.979803  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:28.979866  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.983990  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:28.984044  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:29.022516  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.022543  680821 cri.go:89] found id: ""
	I0130 22:23:29.022553  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:29.022604  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.026989  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:29.027069  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:29.065167  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.065194  680821 cri.go:89] found id: ""
	I0130 22:23:29.065205  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:29.065268  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.069436  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:29.069512  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:29.109503  680821 cri.go:89] found id: ""
	I0130 22:23:29.109532  680821 logs.go:284] 0 containers: []
	W0130 22:23:29.109539  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:29.109546  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:29.109599  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:29.158319  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:29.158343  680821 cri.go:89] found id: ""
	I0130 22:23:29.158350  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:29.158437  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.163004  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:29.163025  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:29.540158  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:29.540203  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:29.616783  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:29.616947  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:29.638172  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:29.638207  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:29.761562  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:29.761613  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:29.803930  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:29.803976  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:29.866722  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:29.866763  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.912093  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:29.912125  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.970591  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:29.970624  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:29.984722  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:29.984748  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:30.040548  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:30.040589  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:30.089982  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:30.090027  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:30.128235  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:30.128267  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:30.169872  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.169906  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:30.169982  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:30.169997  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:30.170008  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:30.170026  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.170035  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:29.653518  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:32.155147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:34.653672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:36.155187  681007 pod_ready.go:81] duration metric: took 4m0.008494222s waiting for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:36.155214  681007 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:36.155224  681007 pod_ready.go:38] duration metric: took 4m2.362439314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:23:36.155243  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:36.155283  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:36.155343  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:36.205838  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:36.205866  681007 cri.go:89] found id: ""
	I0130 22:23:36.205875  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:36.205945  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.210477  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:36.210558  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:36.253110  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:36.253139  681007 cri.go:89] found id: ""
	I0130 22:23:36.253148  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:36.253204  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.257054  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:36.257124  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:36.296932  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.296959  681007 cri.go:89] found id: ""
	I0130 22:23:36.296971  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:36.297034  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.301030  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:36.301080  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:36.339966  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:36.339992  681007 cri.go:89] found id: ""
	I0130 22:23:36.340002  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:36.340062  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.345411  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:36.345474  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:36.389010  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.389031  681007 cri.go:89] found id: ""
	I0130 22:23:36.389039  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:36.389091  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.392885  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:36.392969  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:36.430208  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:36.430228  681007 cri.go:89] found id: ""
	I0130 22:23:36.430237  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:36.430282  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.434507  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:36.434562  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:36.483517  681007 cri.go:89] found id: ""
	I0130 22:23:36.483542  681007 logs.go:284] 0 containers: []
	W0130 22:23:36.483549  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:36.483555  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:36.483613  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:36.543345  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:36.543370  681007 cri.go:89] found id: ""
	I0130 22:23:36.543380  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:36.543445  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.548033  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:36.548064  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:36.630123  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630304  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630456  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630629  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:36.651951  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:36.651990  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:36.667227  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:36.667261  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:36.815056  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:36.815097  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.856960  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:36.856992  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.903856  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:36.903909  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:37.318919  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:37.318964  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:37.368999  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:37.369037  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:37.412698  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:37.412727  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:37.459356  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:37.459389  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:37.509418  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:37.509454  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:37.551349  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:37.551392  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:37.597863  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597892  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:37.597945  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:37.597958  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597964  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597976  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597982  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:37.597988  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597998  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
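Side note on the repeated "listing CRI containers" / "Gathering logs for ..." cycle above: minikube resolves each control-plane component's container ID by name with crictl and then tails that container's logs, while kubelet and CRI-O logs come from the systemd journal. A stand-alone sketch of the same lookup, built only from commands that appear verbatim in this log plus a hypothetical NAME placeholder:

    # resolve the container ID for one component, e.g. kube-apiserver
    NAME=kube-apiserver
    ID=$(sudo crictl ps -a --quiet --name="$NAME")
    # tail the last 400 lines of its logs, as the harness does
    [ -n "$ID" ] && sudo /usr/bin/crictl logs --tail 400 "$ID"
    # kubelet and CRI-O are systemd units, so their logs come from journalctl
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400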
	I0130 22:23:40.180631  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:23:40.180660  680821 system_pods.go:61] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.180665  680821 system_pods.go:61] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.180669  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.180674  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.180678  680821 system_pods.go:61] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.180683  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.180693  680821 system_pods.go:61] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.180701  680821 system_pods.go:61] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.180710  680821 system_pods.go:74] duration metric: took 11.404869748s to wait for pod list to return data ...
	I0130 22:23:40.180749  680821 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:23:40.184327  680821 default_sa.go:45] found service account: "default"
	I0130 22:23:40.184349  680821 default_sa.go:55] duration metric: took 3.590968ms for default service account to be created ...
	I0130 22:23:40.184356  680821 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:23:40.194745  680821 system_pods.go:86] 8 kube-system pods found
	I0130 22:23:40.194769  680821 system_pods.go:89] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.194774  680821 system_pods.go:89] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.194779  680821 system_pods.go:89] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.194784  680821 system_pods.go:89] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.194788  680821 system_pods.go:89] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.194791  680821 system_pods.go:89] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.194800  680821 system_pods.go:89] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.194805  680821 system_pods.go:89] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.194812  680821 system_pods.go:126] duration metric: took 10.451241ms to wait for k8s-apps to be running ...
	I0130 22:23:40.194817  680821 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:23:40.194866  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:23:40.214067  680821 system_svc.go:56] duration metric: took 19.241185ms WaitForService to wait for kubelet.
	I0130 22:23:40.214091  680821 kubeadm.go:581] duration metric: took 4m38.603520566s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:23:40.214134  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:23:40.217725  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:23:40.217791  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:23:40.217812  680821 node_conditions.go:105] duration metric: took 3.672364ms to run NodePressure ...
	I0130 22:23:40.217827  680821 start.go:228] waiting for startup goroutines ...
	I0130 22:23:40.217840  680821 start.go:233] waiting for cluster config update ...
	I0130 22:23:40.217857  680821 start.go:242] writing updated cluster config ...
	I0130 22:23:40.218114  680821 ssh_runner.go:195] Run: rm -f paused
	I0130 22:23:40.275722  680821 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:23:40.278571  680821 out.go:177] * Done! kubectl is now configured to use "embed-certs-713938" cluster and "default" namespace by default
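After the "Done!" line above, the embed-certs-713938 start has finished and kubectl is pointed at that cluster. A minimal, hypothetical smoke test (assuming, as minikube normally arranges, that the kubeconfig context name matches the profile name):

    # confirm the node and the kube-system pods listed above are visible through kubectl
    kubectl --context embed-certs-713938 get nodes
    kubectl --context embed-certs-713938 get pods -n kube-system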
	I0130 22:23:47.599324  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:47.615605  681007 api_server.go:72] duration metric: took 4m15.702208866s to wait for apiserver process to appear ...
	I0130 22:23:47.615630  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:47.615671  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:47.615722  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:47.660944  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:47.660980  681007 cri.go:89] found id: ""
	I0130 22:23:47.660997  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:47.661051  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.666115  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:47.666180  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:47.709726  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:47.709750  681007 cri.go:89] found id: ""
	I0130 22:23:47.709760  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:47.709821  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.714636  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:47.714691  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:47.760216  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:47.760245  681007 cri.go:89] found id: ""
	I0130 22:23:47.760262  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:47.760323  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.765395  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:47.765450  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:47.815572  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:47.815604  681007 cri.go:89] found id: ""
	I0130 22:23:47.815614  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:47.815674  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.819670  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:47.819729  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:47.858767  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:47.858795  681007 cri.go:89] found id: ""
	I0130 22:23:47.858805  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:47.858865  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.863151  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:47.863276  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:47.911294  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:47.911319  681007 cri.go:89] found id: ""
	I0130 22:23:47.911327  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:47.911387  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.915772  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:47.915852  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:47.952096  681007 cri.go:89] found id: ""
	I0130 22:23:47.952125  681007 logs.go:284] 0 containers: []
	W0130 22:23:47.952136  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:47.952144  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:47.952229  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:47.990137  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:47.990162  681007 cri.go:89] found id: ""
	I0130 22:23:47.990170  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:47.990228  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.994880  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:47.994902  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:48.068521  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068700  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068849  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.069010  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.091781  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:48.091820  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:48.213688  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:48.213724  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:48.264200  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:48.264234  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:48.319751  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:48.319785  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:48.357815  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:48.357846  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:48.406822  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:48.406858  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:48.419822  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:48.419852  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:48.471685  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:48.471719  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:48.508040  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:48.508088  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:48.559268  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:48.559302  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:48.609976  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:48.610007  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:48.966774  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966810  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:48.966900  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:48.966912  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966919  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966927  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966934  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.966939  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966945  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:58.967938  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:23:58.973850  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:23:58.975689  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:58.975713  681007 api_server.go:131] duration metric: took 11.360076324s to wait for apiserver health ...
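The healthz probe above targets the default-k8s-diff-port-850803 apiserver on its remapped port 8444. An equivalent manual check is sketched below; anonymous access to /healthz generally works because the default system:public-info-viewer binding covers the health endpoints, though that is an assumption about this cluster's RBAC rather than something shown in the log:

    # -sk: silent, and skip verification of minikube's self-signed serving certificate
    curl -sk https://192.168.50.254:8444/healthz
    # a healthy apiserver answers with the body "ok", matching the response above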
	I0130 22:23:58.975720  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:58.975745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:58.975793  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:59.023436  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:59.023458  681007 cri.go:89] found id: ""
	I0130 22:23:59.023466  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:59.023514  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.027855  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:59.027916  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:59.067167  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:59.067194  681007 cri.go:89] found id: ""
	I0130 22:23:59.067204  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:59.067266  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.076124  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:59.076191  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:59.115918  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:59.115947  681007 cri.go:89] found id: ""
	I0130 22:23:59.115956  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:59.116011  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.120440  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:59.120489  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:59.165157  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.165185  681007 cri.go:89] found id: ""
	I0130 22:23:59.165194  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:59.165254  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.169774  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:59.169845  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:59.230609  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:59.230640  681007 cri.go:89] found id: ""
	I0130 22:23:59.230650  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:59.230713  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.235563  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:59.235653  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:59.279835  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.279866  681007 cri.go:89] found id: ""
	I0130 22:23:59.279886  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:59.279954  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.284745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:59.284809  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:59.331328  681007 cri.go:89] found id: ""
	I0130 22:23:59.331361  681007 logs.go:284] 0 containers: []
	W0130 22:23:59.331374  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:59.331380  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:59.331432  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:59.370468  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.370493  681007 cri.go:89] found id: ""
	I0130 22:23:59.370501  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:59.370553  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.375047  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:59.375075  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.428263  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:59.428297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.495321  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:59.495356  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.537553  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:59.537590  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:59.915651  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:59.915691  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:59.930178  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:59.930209  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:24:00.070621  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:24:00.070656  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:24:00.111617  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:24:00.111655  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:24:00.156067  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:24:00.156104  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:24:00.206264  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:24:00.206292  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:24:00.282212  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282436  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282642  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282805  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.304194  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:24:00.304223  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:24:00.355473  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:24:00.355508  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:24:00.402962  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403001  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:24:00.403077  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:24:00.403092  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403101  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403114  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403124  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.403136  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403144  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:24:10.411200  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:24:10.411225  681007 system_pods.go:61] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.411231  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.411235  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.411239  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.411242  681007 system_pods.go:61] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.411246  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.411252  681007 system_pods.go:61] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.411258  681007 system_pods.go:61] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.411264  681007 system_pods.go:74] duration metric: took 11.435539762s to wait for pod list to return data ...
	I0130 22:24:10.411274  681007 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:24:10.413887  681007 default_sa.go:45] found service account: "default"
	I0130 22:24:10.413915  681007 default_sa.go:55] duration metric: took 2.635544ms for default service account to be created ...
	I0130 22:24:10.413923  681007 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:24:10.420235  681007 system_pods.go:86] 8 kube-system pods found
	I0130 22:24:10.420256  681007 system_pods.go:89] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.420263  681007 system_pods.go:89] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.420271  681007 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.420281  681007 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.420290  681007 system_pods.go:89] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.420301  681007 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.420311  681007 system_pods.go:89] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.420319  681007 system_pods.go:89] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.420327  681007 system_pods.go:126] duration metric: took 6.398195ms to wait for k8s-apps to be running ...
	I0130 22:24:10.420335  681007 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:24:10.420386  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:24:10.438372  681007 system_svc.go:56] duration metric: took 18.027152ms WaitForService to wait for kubelet.
	I0130 22:24:10.438396  681007 kubeadm.go:581] duration metric: took 4m38.525004918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:24:10.438424  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:24:10.441514  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:24:10.441561  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:24:10.441572  681007 node_conditions.go:105] duration metric: took 3.14294ms to run NodePressure ...
	I0130 22:24:10.441583  681007 start.go:228] waiting for startup goroutines ...
	I0130 22:24:10.441591  681007 start.go:233] waiting for cluster config update ...
	I0130 22:24:10.441601  681007 start.go:242] writing updated cluster config ...
	I0130 22:24:10.441855  681007 ssh_runner.go:195] Run: rm -f paused
	I0130 22:24:10.493274  681007 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:24:10.495414  681007 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850803" cluster and "default" namespace by default
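The kubelet warnings repeated in the "Problems detected" blocks above ("configmaps ... is forbidden ... no relationship found between node ... and this object") are flagged but non-fatal for this run; they are typically a transient node-authorizer race right after a kubelet restart. A hedged way to confirm they have stopped recurring, reusing the ssh command form and journal query already used by this harness:

    # re-read the kubelet journal on the profile's VM and grep for fresh authorization failures
    minikube -p default-k8s-diff-port-850803 ssh "sudo journalctl -u kubelet -n 400" | grep -i forbidden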
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:14:23 UTC, ends at Tue 2024-01-30 22:24:12 UTC. --
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.063224630Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=ab5699ac-584e-4293-a3fa-c9bea21025a5 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.064406302Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=66069197-4ef9-4ae6-9dc0-17be869ae6c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.064888398Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653452064873173,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=66069197-4ef9-4ae6-9dc0-17be869ae6c9 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.065648259Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c35842f4-db41-44c0-ae80-087889155baf name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.065737601Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c35842f4-db41-44c0-ae80-087889155baf name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.065978875Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c35842f4-db41-44c0-ae80-087889155baf name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.108606516Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=126a2203-b98f-49db-929e-9396c5bbcd71 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.108687060Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=126a2203-b98f-49db-929e-9396c5bbcd71 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.109766397Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fa2fa335-c92e-48a0-ada9-0e8d6c278ad1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.110315492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653452110293469,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fa2fa335-c92e-48a0-ada9-0e8d6c278ad1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.110968993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c2fca619-0b01-49f3-988c-8b4436a6c08c name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.111011377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c2fca619-0b01-49f3-988c-8b4436a6c08c name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.111488241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c2fca619-0b01-49f3-988c-8b4436a6c08c name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.145060415Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=7c803a06-db13-4cb3-8a0a-00552d0e1b8c name=/runtime.v1.RuntimeService/Version
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.145224710Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=7c803a06-db13-4cb3-8a0a-00552d0e1b8c name=/runtime.v1.RuntimeService/Version
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.146943453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=702d50cc-ec6b-40b5-8915-f14e4150a44c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.147520315Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653452147454904,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=702d50cc-ec6b-40b5-8915-f14e4150a44c name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.148304957Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1b7b2ee8-80ea-442d-a4f3-1d28304341da name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.148349650Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1b7b2ee8-80ea-442d-a4f3-1d28304341da name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.148544335Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1b7b2ee8-80ea-442d-a4f3-1d28304341da name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.166829006Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=5a408e4c-c3cc-4460-8fb9-cc14e1f3ee1c name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.167062697Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c00e7e7fbf23814a9c04341cad38d0cafdd459022ac11d22b2dfe9c4dcb5b46e,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-w74c9,Uid:a6e0dfa3-af30-4543-ae29-70ff582bc6ca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652916247336033,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-w74c9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a6e0dfa3-af30-4543-ae29-70ff582bc6ca,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T22:15:15.912078056Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&PodSandboxMetadata{Name:busybox,Uid:a24f5188-6b75-4de9-8a25-84a67697bd40,Namespace
:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652902365758311,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T22:14:58.204047627Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-7wr8t,Uid:4b6a3982-1256-41e6-9311-1195746df25a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652902357610268,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T22:14
:58.204049844Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&PodSandboxMetadata{Name:kube-proxy-qm7xx,Uid:4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652900669863703,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-01-30T22:14:58.204041106Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652900061254894,Labels:map[str
ing]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernete
s.io/config.seen: 2024-01-30T22:14:58.204045544Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-912992,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652891167780091,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2024-01-30T22:14:50.28989631Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-912992,Uid:dc1c785143b8b75ceb521c2487b9ea18,Nam
espace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652891155723977,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc1c785143b8b75ceb521c2487b9ea18,kubernetes.io/config.seen: 2024-01-30T22:14:50.289898309Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-912992,Uid:32c367f5dfa3e794388fc594b045f44b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652891127421324,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa
3e794388fc594b045f44b,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 32c367f5dfa3e794388fc594b045f44b,kubernetes.io/config.seen: 2024-01-30T22:14:50.289900286Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-912992,Uid:b39706a67360d65bfa3cf2560791efe9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1706652891110408420,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b39706a67360d65bfa3cf2560791efe9,kubernetes.io/config.seen: 2024-01-30T22:14:50.289883972Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file=
"go-grpc-middleware/chain.go:25" id=5a408e4c-c3cc-4460-8fb9-cc14e1f3ee1c name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.168609263Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=66ea3d33-b984-4131-ab1b-e41729d60618 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.168667762Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=66ea3d33-b984-4131-ab1b-e41729d60618 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Jan 30 22:24:12 old-k8s-version-912992 crio[711]: time="2024-01-30 22:24:12.168894566Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=66ea3d33-b984-4131-ab1b-e41729d60618 name=/runtime.v1alpha2.RuntimeService/ListContainers
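	
	The crio debug entries above are the Version, ImageFsInfo, ListContainers and ListPodSandbox CRI calls answered by cri-o while these logs were captured. The same container and sandbox lists can be pulled by hand over the socket named in the node's cri-socket annotation (/var/run/crio/crio.sock) — a minimal sketch, assuming crictl is available in the guest:
	
	    out/minikube-linux-amd64 -p old-k8s-version-912992 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a"
	    out/minikube-linux-amd64 -p old-k8s-version-912992 ssh "sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock pods"
	
	crictl ps -a issues the same ListContainers RPC and crictl pods the same ListPodSandbox RPC, so the container and sandbox IDs in their output should match the ones in the responses above.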
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b2c0e91a4312       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       1                   8f2e98ca68544       storage-provisioner
	7cf78188f8810       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   7fc3fdb217881       busybox
	61ab23a25f123       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   061e6cd887b02       coredns-5644d7b6d9-7wr8t
	4c48b0d429b38       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   2e2da6c5a177c       kube-proxy-qm7xx
	ddad721f8f253       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   8f2e98ca68544       storage-provisioner
	2123e32c8a2e1       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   c502e6abcef1d       etcd-old-k8s-version-912992
	15f24b3dcf08a       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   1789c91b615b4       kube-scheduler-old-k8s-version-912992
	dbd8457575a94       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   89c866f1c9af5       kube-controller-manager-old-k8s-version-912992
	642acc732ea38       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   3ab003d2bc5c3       kube-apiserver-old-k8s-version-912992
	
	
	==> coredns [61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7] <==
	.:53
	2024-01-30T22:04:20.926Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-30T22:04:20.926Z [INFO] CoreDNS-1.6.2
	2024-01-30T22:04:20.926Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-30T22:04:22.070Z [INFO] 127.0.0.1:50418 - 56786 "HINFO IN 7115203054942692213.8487583809034102998. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.144213359s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-30T22:15:02.876Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-30T22:15:02.877Z [INFO] CoreDNS-1.6.2
	2024-01-30T22:15:02.877Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-30T22:15:02.924Z [INFO] 127.0.0.1:41742 - 55787 "HINFO IN 5107901589387885354.4389539816333725312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046383682s
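	
	Both coredns starts report the same configuration MD5 (73c7bdb6903c83cd433a46b2e9eb4233), once at 22:04 and again at 22:15, so the same Corefile appears to have been loaded before and after the node restart; the single HINFO NXDOMAIN line each time looks like coredns' normal startup self-check query. To see the Corefile itself — a sketch, assuming the default kubeadm ConfigMap name of coredns:
	
	    kubectl --context old-k8s-version-912992 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'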
	
	
	==> describe nodes <==
	Name:               old-k8s-version-912992
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-912992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=old-k8s-version-912992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_04_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:23:28 +0000   Tue, 30 Jan 2024 22:03:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:23:28 +0000   Tue, 30 Jan 2024 22:03:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:23:28 +0000   Tue, 30 Jan 2024 22:03:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:23:28 +0000   Tue, 30 Jan 2024 22:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    old-k8s-version-912992
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 2b328d8096d94a12b7148e9c4c55cb20
	 System UUID:                2b328d80-96d9-4a12-b714-8e9c4c55cb20
	 Boot ID:                    6423afe2-37ad-40e5-b3cd-05296015b92f
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                coredns-5644d7b6d9-7wr8t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     19m
	  kube-system                etcd-old-k8s-version-912992                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-apiserver-old-k8s-version-912992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-controller-manager-old-k8s-version-912992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m14s
	  kube-system                kube-proxy-qm7xx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                kube-scheduler-old-k8s-version-912992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                metrics-server-74d5856cc6-w74c9                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m57s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)      kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)      kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)      kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                    kube-proxy, old-k8s-version-912992  Starting kube-proxy.
	  Normal  Starting                 9m22s                  kubelet, old-k8s-version-912992     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s (x8 over 9m22s)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s (x7 over 9m22s)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s (x8 over 9m22s)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet, old-k8s-version-912992     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m11s                  kube-proxy, old-k8s-version-912992  Starting kube-proxy.
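	
	The request and limit percentages in the Allocated resources table above are each total divided by the node's allocatable capacity (cpu: 2, memory: 2165900Ki); a quick cross-check of the figures shown:
	
	    cpu:     750m  / 2000m               ≈ 37%
	    memory:  270Mi / 2165900Ki (≈2115Mi) ≈ 12% (requests),  170Mi / ≈2115Mi ≈ 8% (limits)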
	
	
	==> dmesg <==
	[Jan30 22:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074838] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.918628] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.461539] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164732] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.497368] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000059] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.527916] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.119104] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.164230] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.122803] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.225326] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +18.102883] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +0.415774] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan30 22:15] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59] <==
	2024-01-30 22:14:53.817077 I | etcdserver: heartbeat = 100ms
	2024-01-30 22:14:53.817091 I | etcdserver: election = 1000ms
	2024-01-30 22:14:53.817208 I | etcdserver: snapshot count = 10000
	2024-01-30 22:14:53.817314 I | etcdserver: advertise client URLs = https://192.168.39.84:2379
	2024-01-30 22:14:53.830205 I | etcdserver: restarting member 9759e6b18ded37f5 in cluster 5f38fc1d36b986e7 at commit index 540
	2024-01-30 22:14:53.830333 I | raft: 9759e6b18ded37f5 became follower at term 2
	2024-01-30 22:14:53.830367 I | raft: newRaft 9759e6b18ded37f5 [peers: [], term: 2, commit: 540, applied: 0, lastindex: 540, lastterm: 2]
	2024-01-30 22:14:53.839996 W | auth: simple token is not cryptographically signed
	2024-01-30 22:14:53.842978 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-30 22:14:53.844741 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-30 22:14:53.844882 I | embed: listening for metrics on http://192.168.39.84:2381
	2024-01-30 22:14:53.845715 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-30 22:14:53.845922 I | etcdserver/membership: added member 9759e6b18ded37f5 [https://192.168.39.84:2380] to cluster 5f38fc1d36b986e7
	2024-01-30 22:14:53.846046 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-30 22:14:53.846090 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-30 22:14:55.030802 I | raft: 9759e6b18ded37f5 is starting a new election at term 2
	2024-01-30 22:14:55.030898 I | raft: 9759e6b18ded37f5 became candidate at term 3
	2024-01-30 22:14:55.030925 I | raft: 9759e6b18ded37f5 received MsgVoteResp from 9759e6b18ded37f5 at term 3
	2024-01-30 22:14:55.030946 I | raft: 9759e6b18ded37f5 became leader at term 3
	2024-01-30 22:14:55.030963 I | raft: raft.node: 9759e6b18ded37f5 elected leader 9759e6b18ded37f5 at term 3
	2024-01-30 22:14:55.032860 I | etcdserver: published {Name:old-k8s-version-912992 ClientURLs:[https://192.168.39.84:2379]} to cluster 5f38fc1d36b986e7
	2024-01-30 22:14:55.033358 I | embed: ready to serve client requests
	2024-01-30 22:14:55.034526 I | embed: ready to serve client requests
	2024-01-30 22:14:55.036218 I | embed: serving client requests on 192.168.39.84:2379
	2024-01-30 22:14:55.036973 I | embed: serving client requests on 127.0.0.1:2379
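	
	The etcd log shows the single member 9759e6b18ded37f5 restarting at commit index 540, starting a new election and becoming leader at term 3, i.e. the one-node cluster recovered on its own after the reboot. Its health endpoint can be probed from the guest with the certificate paths reported in the ClientTLS line — a sketch, assuming those files exist on the host and client-cert auth is required as logged:
	
	    out/minikube-linux-amd64 -p old-k8s-version-912992 ssh "sudo curl -s --cacert /var/lib/minikube/certs/etcd/ca.crt --cert /var/lib/minikube/certs/etcd/server.crt --key /var/lib/minikube/certs/etcd/server.key https://127.0.0.1:2379/health"
	
	A healthy member responds with a small JSON body reporting "health": "true".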
	
	
	==> kernel <==
	 22:24:12 up 9 min,  0 users,  load average: 0.17, 0.18, 0.11
	Linux old-k8s-version-912992 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4] <==
	I0130 22:16:00.053358       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:16:00.053511       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:16:00.053600       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:16:00.053633       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:18:00.053989       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:18:00.054236       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:18:00.054289       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:18:00.054300       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:19:59.282975       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:19:59.283186       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:19:59.283260       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:19:59.283269       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:20:59.283546       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:20:59.283802       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:20:59.283891       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:20:59.283925       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:22:59.284365       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:22:59.284708       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:22:59.284841       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:22:59.284888       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
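	
	Each of these 503s is the aggregation layer failing to reach the service backing the v1beta1.metrics.k8s.io APIService, so the OpenAPI controller keeps rate-limit-requeueing the item; the API server itself is otherwise serving normally here. The registration state can be inspected directly — a sketch using plain kubectl:
	
	    kubectl --context old-k8s-version-912992 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context old-k8s-version-912992 describe apiservice v1beta1.metrics.k8s.io
	
	An Available=False condition with a reason such as FailedDiscoveryCheck or MissingEndpoints would typically point at the metrics-server Service rather than at the apiserver.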
	
	
	==> kube-controller-manager [dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933] <==
	E0130 22:17:47.659202       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:17:56.591544       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:18:17.910967       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:18:28.593368       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:18:48.162894       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:19:00.595852       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:19:18.414916       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:19:32.598600       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:19:48.666845       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:20:04.600882       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:20:18.918778       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:20:36.602940       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:20:49.170802       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:21:08.609003       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:21:19.422781       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:21:40.611032       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:21:49.674944       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:22:12.613062       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:22:19.926953       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:22:44.615561       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:22:50.178568       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:23:16.617633       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:23:20.430503       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:23:48.619831       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:23:50.682517       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324] <==
	W0130 22:04:19.978775       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0130 22:04:20.008477       1 node.go:135] Successfully retrieved node IP: 192.168.39.84
	I0130 22:04:20.008548       1 server_others.go:149] Using iptables Proxier.
	I0130 22:04:20.010459       1 server.go:529] Version: v1.16.0
	I0130 22:04:20.016078       1 config.go:313] Starting service config controller
	I0130 22:04:20.016134       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0130 22:04:20.016174       1 config.go:131] Starting endpoints config controller
	I0130 22:04:20.016342       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0130 22:04:20.120480       1 shared_informer.go:204] Caches are synced for service config 
	I0130 22:04:20.120748       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0130 22:15:01.397449       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0130 22:15:01.499627       1 node.go:135] Successfully retrieved node IP: 192.168.39.84
	I0130 22:15:01.499703       1 server_others.go:149] Using iptables Proxier.
	I0130 22:15:01.523326       1 server.go:529] Version: v1.16.0
	I0130 22:15:01.533037       1 config.go:131] Starting endpoints config controller
	I0130 22:15:01.534607       1 config.go:313] Starting service config controller
	I0130 22:15:01.539592       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0130 22:15:01.539582       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0130 22:15:01.640573       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0130 22:15:01.640660       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184] <==
	E0130 22:03:58.868477       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:03:58.868567       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:03:58.871241       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 22:03:58.875330       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:03:58.876281       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 22:03:58.877393       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:03:58.878433       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 22:03:58.879748       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 22:03:58.882038       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:03:58.883524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 22:04:17.326850       1 factory.go:585] pod is already present in the activeQ
	E0130 22:04:19.002553       1 scheduler.go:658] error binding pod: Operation cannot be fulfilled on pods/binding "coredns-5644d7b6d9-q2xnt": pod coredns-5644d7b6d9-q2xnt is being deleted, cannot be assigned to a host
	E0130 22:04:19.004162       1 factory.go:561] Error scheduling kube-system/coredns-5644d7b6d9-q2xnt: Operation cannot be fulfilled on pods/binding "coredns-5644d7b6d9-q2xnt": pod coredns-5644d7b6d9-q2xnt is being deleted, cannot be assigned to a host; retrying
	E0130 22:04:19.133984       1 scheduler.go:333] Error updating the condition of the pod kube-system/coredns-5644d7b6d9-q2xnt: Operation cannot be fulfilled on pods "coredns-5644d7b6d9-q2xnt": the object has been modified; please apply your changes to the latest version and try again
	I0130 22:14:52.780316       1 serving.go:319] Generated self-signed cert in-memory
	W0130 22:14:58.274824       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 22:14:58.274869       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:14:58.274879       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 22:14:58.274886       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 22:14:58.285579       1 server.go:143] Version: v1.16.0
	I0130 22:14:58.285807       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0130 22:14:58.297777       1 authorization.go:47] Authorization is disabled
	W0130 22:14:58.297900       1 authentication.go:79] Authentication is disabled
	I0130 22:14:58.297911       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0130 22:14:58.300494       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:14:23 UTC, ends at Tue 2024-01-30 22:24:12 UTC. --
	Jan 30 22:19:48 old-k8s-version-912992 kubelet[1020]: E0130 22:19:48.309796    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:19:50 old-k8s-version-912992 kubelet[1020]: E0130 22:19:50.369798    1020 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 30 22:20:03 old-k8s-version-912992 kubelet[1020]: E0130 22:20:03.310362    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:20:14 old-k8s-version-912992 kubelet[1020]: E0130 22:20:14.309699    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:20:29 old-k8s-version-912992 kubelet[1020]: E0130 22:20:29.309788    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:20:43 old-k8s-version-912992 kubelet[1020]: E0130 22:20:43.310183    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:20:58 old-k8s-version-912992 kubelet[1020]: E0130 22:20:58.326494    1020 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:20:58 old-k8s-version-912992 kubelet[1020]: E0130 22:20:58.326571    1020 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:20:58 old-k8s-version-912992 kubelet[1020]: E0130 22:20:58.326622    1020 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:20:58 old-k8s-version-912992 kubelet[1020]: E0130 22:20:58.326654    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 30 22:21:12 old-k8s-version-912992 kubelet[1020]: E0130 22:21:12.311232    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:21:24 old-k8s-version-912992 kubelet[1020]: E0130 22:21:24.311990    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:21:35 old-k8s-version-912992 kubelet[1020]: E0130 22:21:35.309779    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:21:47 old-k8s-version-912992 kubelet[1020]: E0130 22:21:47.309837    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:22:01 old-k8s-version-912992 kubelet[1020]: E0130 22:22:01.309746    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:22:14 old-k8s-version-912992 kubelet[1020]: E0130 22:22:14.310686    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:22:26 old-k8s-version-912992 kubelet[1020]: E0130 22:22:26.310046    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:22:37 old-k8s-version-912992 kubelet[1020]: E0130 22:22:37.310304    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:22:51 old-k8s-version-912992 kubelet[1020]: E0130 22:22:51.309926    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:23:06 old-k8s-version-912992 kubelet[1020]: E0130 22:23:06.312305    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:23:21 old-k8s-version-912992 kubelet[1020]: E0130 22:23:21.310182    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:23:35 old-k8s-version-912992 kubelet[1020]: E0130 22:23:35.309823    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:23:46 old-k8s-version-912992 kubelet[1020]: E0130 22:23:46.309696    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:24:01 old-k8s-version-912992 kubelet[1020]: E0130 22:24:01.309765    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:24:12 old-k8s-version-912992 kubelet[1020]: E0130 22:24:12.312431    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38] <==
	I0130 22:15:31.621489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:15:31.641489       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:15:31.641580       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:15:49.042563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:15:49.044255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a5a79f0-2c74-47af-97cc-5ecbad74ac28", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-912992_f1f607de-4aaa-4be2-8149-e11afa9f5248 became leader
	I0130 22:15:49.045868       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_f1f607de-4aaa-4be2-8149-e11afa9f5248!
	I0130 22:15:49.146207       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_f1f607de-4aaa-4be2-8149-e11afa9f5248!
	
	
	==> storage-provisioner [ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f] <==
	I0130 22:04:20.966303       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:04:20.978788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:04:20.978909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:04:20.989883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:04:20.990140       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_882d780a-2baa-42ed-b644-7a8e7b488d71!
	I0130 22:04:20.993675       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a5a79f0-2c74-47af-97cc-5ecbad74ac28", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-912992_882d780a-2baa-42ed-b644-7a8e7b488d71 became leader
	I0130 22:04:21.091417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_882d780a-2baa-42ed-b644-7a8e7b488d71!
	E0130 22:05:51.635541       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0130 22:15:00.665983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 22:15:30.669667       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-912992 -n old-k8s-version-912992
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-912992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-w74c9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-912992 describe pod metrics-server-74d5856cc6-w74c9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-912992 describe pod metrics-server-74d5856cc6-w74c9: exit status 1 (70.145809ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-w74c9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-912992 describe pod metrics-server-74d5856cc6-w74c9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.30s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.39s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 22:19:25.157861  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:19:32.717021  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 22:20:48.208673  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:21:52.587748  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-023824 -n no-preload-023824
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:28:04.201803207 +0000 UTC m=+5255.602745331
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-023824 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-023824 logs -n 25: (1.711285874s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-742001                              | stopped-upgrade-742001       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-822826                              | cert-expiration-822826       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:09:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:09:08.900187  681007 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:09:08.900447  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900456  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:09:08.900460  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900635  681007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:09:08.901158  681007 out.go:303] Setting JSON to false
	I0130 22:09:08.902121  681007 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10301,"bootTime":1706642248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:09:08.902185  681007 start.go:138] virtualization: kvm guest
	I0130 22:09:08.904443  681007 out.go:177] * [default-k8s-diff-port-850803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:09:08.905904  681007 notify.go:220] Checking for updates...
	I0130 22:09:08.905916  681007 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:09:08.907548  681007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:09:08.908959  681007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:09:08.910401  681007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:09:08.911766  681007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:09:08.913044  681007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:09:08.914682  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:09:08.915157  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.915201  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.929650  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0130 22:09:08.930098  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.930701  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.930721  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.931048  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.931239  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.931458  681007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:09:08.931745  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.931778  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.946395  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0130 22:09:08.946754  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.947305  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.947328  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.947686  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.947865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.982088  681007 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 22:09:08.983300  681007 start.go:298] selected driver: kvm2
	I0130 22:09:08.983312  681007 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.983408  681007 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:09:08.984088  681007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:08.984161  681007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:09:08.997808  681007 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:09:08.998205  681007 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 22:09:08.998285  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:09:08.998305  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:09:08.998323  681007 start_flags.go:321] config:
	{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.998554  681007 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:09.000506  681007 out.go:177] * Starting control plane node default-k8s-diff-port-850803 in cluster default-k8s-diff-port-850803
	I0130 22:09:09.417791  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:09.001801  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:09:09.001832  681007 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 22:09:09.001844  681007 cache.go:56] Caching tarball of preloaded images
	I0130 22:09:09.001930  681007 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:09:09.001942  681007 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 22:09:09.002074  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:09:09.002279  681007 start.go:365] acquiring machines lock for default-k8s-diff-port-850803: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:09:15.497723  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:18.569709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:24.649709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:27.721682  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:33.801746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:36.873758  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:42.953715  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:46.025774  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:52.105752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:55.177803  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:01.257740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:04.329775  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:10.409748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:13.481709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:19.561742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:22.634236  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:28.713807  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:31.785746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:37.865734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:40.937754  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:47.017740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:50.089744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:56.169767  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:59.241735  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:05.321760  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:08.393763  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:14.473745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:17.545673  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:23.625780  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:26.697711  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:32.777688  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:35.849700  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:41.929752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:45.001744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:51.081733  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:54.153686  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:00.233749  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:03.305724  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:09.385748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:12.457710  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:18.537805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:21.609734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:27.689765  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:30.761718  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:36.841762  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:39.913805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:45.993742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:49.065753  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:55.145745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:58.217703  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.302231  680786 start.go:369] acquired machines lock for "no-preload-023824" in 4m22.656152529s
	I0130 22:13:07.302304  680786 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:07.302314  680786 fix.go:54] fixHost starting: 
	I0130 22:13:07.302790  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:07.302835  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:07.317987  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0130 22:13:07.318451  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:07.318943  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:13:07.318965  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:07.319340  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:07.319538  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:07.319679  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:13:07.321151  680786 fix.go:102] recreateIfNeeded on no-preload-023824: state=Stopped err=<nil>
	I0130 22:13:07.321173  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	W0130 22:13:07.321343  680786 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:07.322929  680786 out.go:177] * Restarting existing kvm2 VM for "no-preload-023824" ...
	I0130 22:13:04.297739  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.299984  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:07.300024  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:13:07.302029  680506 machine.go:91] provisioned docker machine in 4m44.646018806s
	I0130 22:13:07.302108  680506 fix.go:56] fixHost completed within 4m44.666279152s
	I0130 22:13:07.302116  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 4m44.666320503s
	W0130 22:13:07.302153  680506 start.go:694] error starting host: provision: host is not running
	W0130 22:13:07.302282  680506 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 22:13:07.302293  680506 start.go:709] Will try again in 5 seconds ...
	I0130 22:13:07.324101  680786 main.go:141] libmachine: (no-preload-023824) Calling .Start
	I0130 22:13:07.324252  680786 main.go:141] libmachine: (no-preload-023824) Ensuring networks are active...
	I0130 22:13:07.325034  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network default is active
	I0130 22:13:07.325415  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network mk-no-preload-023824 is active
	I0130 22:13:07.325804  680786 main.go:141] libmachine: (no-preload-023824) Getting domain xml...
	I0130 22:13:07.326696  680786 main.go:141] libmachine: (no-preload-023824) Creating domain...
	I0130 22:13:08.499216  680786 main.go:141] libmachine: (no-preload-023824) Waiting to get IP...
	I0130 22:13:08.500483  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.500933  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.501067  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.500931  681630 retry.go:31] will retry after 268.447444ms: waiting for machine to come up
	I0130 22:13:08.771705  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.772073  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.772101  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.772010  681630 retry.go:31] will retry after 235.233391ms: waiting for machine to come up
	I0130 22:13:09.008402  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.008795  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.008826  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.008757  681630 retry.go:31] will retry after 433.981592ms: waiting for machine to come up
	I0130 22:13:09.444576  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.444963  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.445001  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.444900  681630 retry.go:31] will retry after 518.108537ms: waiting for machine to come up
	I0130 22:13:12.306584  680506 start.go:365] acquiring machines lock for old-k8s-version-912992: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:13:09.964605  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.964956  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.964985  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.964919  681630 retry.go:31] will retry after 497.667085ms: waiting for machine to come up
	I0130 22:13:10.464522  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:10.464897  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:10.464930  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:10.464853  681630 retry.go:31] will retry after 918.136538ms: waiting for machine to come up
	I0130 22:13:11.384191  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:11.384665  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:11.384719  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:11.384630  681630 retry.go:31] will retry after 942.595537ms: waiting for machine to come up
	I0130 22:13:12.328976  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:12.329412  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:12.329438  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:12.329365  681630 retry.go:31] will retry after 1.080632129s: waiting for machine to come up
	I0130 22:13:13.411494  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:13.411880  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:13.411905  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:13.411830  681630 retry.go:31] will retry after 1.70851135s: waiting for machine to come up
	I0130 22:13:15.122731  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:15.123212  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:15.123244  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:15.123164  681630 retry.go:31] will retry after 1.890143577s: waiting for machine to come up
	I0130 22:13:17.016347  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:17.016789  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:17.016812  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:17.016745  681630 retry.go:31] will retry after 2.710901352s: waiting for machine to come up
	I0130 22:13:19.731235  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:19.731687  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:19.731717  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:19.731628  681630 retry.go:31] will retry after 3.494667363s: waiting for machine to come up
	I0130 22:13:23.227477  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:23.227894  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:23.227927  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:23.227844  681630 retry.go:31] will retry after 4.45900259s: waiting for machine to come up
	I0130 22:13:28.902379  680821 start.go:369] acquired machines lock for "embed-certs-713938" in 4m43.197815022s
	I0130 22:13:28.902454  680821 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:28.902466  680821 fix.go:54] fixHost starting: 
	I0130 22:13:28.902824  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:28.902863  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:28.922121  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0130 22:13:28.922554  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:28.923019  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:13:28.923040  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:28.923378  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:28.923587  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:28.923730  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:13:28.925000  680821 fix.go:102] recreateIfNeeded on embed-certs-713938: state=Stopped err=<nil>
	I0130 22:13:28.925042  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	W0130 22:13:28.925225  680821 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:28.927620  680821 out.go:177] * Restarting existing kvm2 VM for "embed-certs-713938" ...
	I0130 22:13:27.688611  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689047  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has current primary IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689078  680786 main.go:141] libmachine: (no-preload-023824) Found IP for machine: 192.168.61.232
	I0130 22:13:27.689095  680786 main.go:141] libmachine: (no-preload-023824) Reserving static IP address...
	I0130 22:13:27.689540  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.689585  680786 main.go:141] libmachine: (no-preload-023824) DBG | skip adding static IP to network mk-no-preload-023824 - found existing host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"}
	I0130 22:13:27.689610  680786 main.go:141] libmachine: (no-preload-023824) Reserved static IP address: 192.168.61.232
	I0130 22:13:27.689630  680786 main.go:141] libmachine: (no-preload-023824) Waiting for SSH to be available...
	I0130 22:13:27.689645  680786 main.go:141] libmachine: (no-preload-023824) DBG | Getting to WaitForSSH function...
	I0130 22:13:27.691725  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692037  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.692060  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692196  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH client type: external
	I0130 22:13:27.692236  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa (-rw-------)
	I0130 22:13:27.692288  680786 main.go:141] libmachine: (no-preload-023824) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:27.692305  680786 main.go:141] libmachine: (no-preload-023824) DBG | About to run SSH command:
	I0130 22:13:27.692318  680786 main.go:141] libmachine: (no-preload-023824) DBG | exit 0
	I0130 22:13:27.784900  680786 main.go:141] libmachine: (no-preload-023824) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:27.785232  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetConfigRaw
	I0130 22:13:27.786142  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:27.788581  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.788961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.788997  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.789280  680786 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/config.json ...
	I0130 22:13:27.789457  680786 machine.go:88] provisioning docker machine ...
	I0130 22:13:27.789489  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:27.789691  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.789857  680786 buildroot.go:166] provisioning hostname "no-preload-023824"
	I0130 22:13:27.789879  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.790013  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.792055  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792370  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.792405  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792478  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.792643  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.792790  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.793010  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.793205  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.793814  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.793842  680786 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-023824 && echo "no-preload-023824" | sudo tee /etc/hostname
	I0130 22:13:27.931141  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-023824
	
	I0130 22:13:27.931176  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.933882  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934242  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.934277  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934403  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.934588  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934748  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934917  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.935106  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.935413  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.935438  680786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-023824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-023824/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-023824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:28.067312  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:28.067345  680786 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:28.067368  680786 buildroot.go:174] setting up certificates
	I0130 22:13:28.067380  680786 provision.go:83] configureAuth start
	I0130 22:13:28.067389  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:28.067687  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.070381  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070751  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.070787  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070891  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.073317  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073672  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.073704  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073925  680786 provision.go:138] copyHostCerts
	I0130 22:13:28.074050  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:28.074092  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:28.074186  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:28.074311  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:28.074330  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:28.074381  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:28.074474  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:28.074485  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:28.074527  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:28.074604  680786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.no-preload-023824 san=[192.168.61.232 192.168.61.232 localhost 127.0.0.1 minikube no-preload-023824]
	I0130 22:13:28.175428  680786 provision.go:172] copyRemoteCerts
	I0130 22:13:28.175531  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:28.175566  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.178015  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178376  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.178416  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178540  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.178705  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.178860  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.179029  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.265687  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:28.287768  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:28.309363  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:28.331204  680786 provision.go:86] duration metric: configureAuth took 263.811459ms
	I0130 22:13:28.331232  680786 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:28.331476  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:13:28.331568  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.333837  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334205  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.334243  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334421  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.334626  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334804  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334978  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.335183  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.335552  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.335569  680786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:28.648182  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:28.648214  680786 machine.go:91] provisioned docker machine in 858.733436ms
	I0130 22:13:28.648228  680786 start.go:300] post-start starting for "no-preload-023824" (driver="kvm2")
	I0130 22:13:28.648254  680786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:28.648272  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.648633  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:28.648669  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.651616  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.651990  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.652019  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.652200  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.652427  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.652589  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.652737  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.742644  680786 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:28.746791  680786 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:28.746818  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:28.746949  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:28.747065  680786 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:28.747165  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:28.755371  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:28.776917  680786 start.go:303] post-start completed in 128.667778ms
	I0130 22:13:28.776944  680786 fix.go:56] fixHost completed within 21.474623735s
	I0130 22:13:28.776969  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.779261  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779562  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.779591  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779715  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.779938  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780109  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780291  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.780465  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.780778  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.780790  680786 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:28.902234  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652808.852489807
	
	I0130 22:13:28.902258  680786 fix.go:206] guest clock: 1706652808.852489807
	I0130 22:13:28.902265  680786 fix.go:219] Guest: 2024-01-30 22:13:28.852489807 +0000 UTC Remote: 2024-01-30 22:13:28.776948754 +0000 UTC m=+284.278530089 (delta=75.541053ms)
	I0130 22:13:28.902285  680786 fix.go:190] guest clock delta is within tolerance: 75.541053ms
	I0130 22:13:28.902291  680786 start.go:83] releasing machines lock for "no-preload-023824", held for 21.600013123s
	I0130 22:13:28.902314  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.902603  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.905058  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905455  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.905516  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905584  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906376  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906578  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906653  680786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:28.906711  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.906863  680786 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:28.906902  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.909484  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909525  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909824  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909856  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909886  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909902  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909952  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910141  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910150  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910347  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910350  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.910620  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:29.028948  680786 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:29.034774  680786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:29.182970  680786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:29.190306  680786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:29.190375  680786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:29.205114  680786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:29.205135  680786 start.go:475] detecting cgroup driver to use...
	I0130 22:13:29.205195  680786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:29.220998  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:29.234283  680786 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:29.234332  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:29.246205  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:29.258169  680786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:29.366756  680786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:29.499821  680786 docker.go:233] disabling docker service ...
	I0130 22:13:29.499908  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:29.513281  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:29.526823  680786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:29.644395  680786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:29.756912  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:29.768811  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:29.785830  680786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:29.785897  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.794702  680786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:29.794755  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.803342  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.812148  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.820802  680786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:29.830052  680786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:29.838334  680786 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:29.838402  680786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:29.849789  680786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:29.858298  680786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:29.968180  680786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:30.134232  680786 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:30.134309  680786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:30.139054  680786 start.go:543] Will wait 60s for crictl version
	I0130 22:13:30.139130  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.142760  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:30.183071  680786 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:30.183175  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.225981  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.276982  680786 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 22:13:28.928924  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Start
	I0130 22:13:28.929139  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring networks are active...
	I0130 22:13:28.929766  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network default is active
	I0130 22:13:28.930145  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network mk-embed-certs-713938 is active
	I0130 22:13:28.930485  680821 main.go:141] libmachine: (embed-certs-713938) Getting domain xml...
	I0130 22:13:28.931095  680821 main.go:141] libmachine: (embed-certs-713938) Creating domain...
	I0130 22:13:30.162733  680821 main.go:141] libmachine: (embed-certs-713938) Waiting to get IP...
	I0130 22:13:30.163807  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.164261  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.164352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.164238  681759 retry.go:31] will retry after 217.071442ms: waiting for machine to come up
	I0130 22:13:30.382542  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.382918  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.382952  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.382899  681759 retry.go:31] will retry after 372.773352ms: waiting for machine to come up
	I0130 22:13:30.278407  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:30.281307  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281730  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:30.281762  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281947  680786 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:30.285873  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:30.299947  680786 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:13:30.300015  680786 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:30.342071  680786 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 22:13:30.342094  680786 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:13:30.342198  680786 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.342218  680786 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.342257  680786 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.342278  680786 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.342288  680786 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.342205  680786 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.342265  680786 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 22:13:30.342563  680786 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343800  680786 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 22:13:30.343838  680786 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.343804  680786 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343805  680786 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.343809  680786 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.343801  680786 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.514364  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 22:13:30.529476  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.537822  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.540358  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.546677  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.559021  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.559189  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.579664  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.721137  680786 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 22:13:30.721228  680786 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.721280  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.745682  680786 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 22:13:30.745742  680786 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.745796  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750720  680786 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 22:13:30.750770  680786 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.750821  680786 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 22:13:30.750841  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750854  680786 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.750897  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768135  680786 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 22:13:30.768182  680786 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.768199  680786 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 22:13:30.768243  680786 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.768289  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768303  680786 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 22:13:30.768246  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768384  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.768329  680786 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.768499  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.768527  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.785074  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.785548  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.895706  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.895775  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.895925  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.910469  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910496  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910549  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 22:13:30.910578  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910584  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 22:13:30.910580  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910664  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.910628  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:30.928331  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 22:13:30.928431  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:30.958095  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958123  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958140  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 22:13:30.958176  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958205  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958178  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958249  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 22:13:30.958182  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958271  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958290  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 22:13:33.833277  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.87499883s)
	I0130 22:13:33.833318  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 22:13:33.833336  680786 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.875036585s)
	I0130 22:13:33.833372  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 22:13:33.833366  680786 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:33.833461  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
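The cache_images sequence above follows a fixed per-image pattern: inspect the image in the runtime, remove a stale tag with crictl if the hash does not match, stat the cached tarball, then load it with podman. A minimal local sketch of that check-and-load step follows; the image name, tarball path, and direct os/exec calls are assumptions for illustration only (minikube runs these commands over SSH inside the VM via ssh_runner):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ensureImage is a simplified stand-in for the inspect-then-load flow in the log.
	func ensureImage(name, tarball string) error {
		// Ask the container runtime whether the image is already present.
		if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", name).Run(); err == nil {
			return nil // already loaded; nothing to do
		}
		// Otherwise load it from the cached tarball, as in "sudo podman load -i ..." above.
		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
		}
		return nil
	}
	
	func main() {
		if err := ensureImage("registry.k8s.io/etcd:3.5.10-0", "/var/lib/minikube/images/etcd_3.5.10-0"); err != nil {
			fmt.Println(err)
		}
	}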
	I0130 22:13:30.757262  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.757819  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.757870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.757738  681759 retry.go:31] will retry after 414.437055ms: waiting for machine to come up
	I0130 22:13:31.174434  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.174883  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.174936  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.174831  681759 retry.go:31] will retry after 555.308421ms: waiting for machine to come up
	I0130 22:13:31.731536  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.732150  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.732188  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.732111  681759 retry.go:31] will retry after 484.945442ms: waiting for machine to come up
	I0130 22:13:32.218554  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:32.218989  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:32.219024  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:32.218934  681759 retry.go:31] will retry after 802.660361ms: waiting for machine to come up
	I0130 22:13:33.022920  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:33.023362  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:33.023397  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:33.023298  681759 retry.go:31] will retry after 990.694559ms: waiting for machine to come up
	I0130 22:13:34.015896  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:34.016379  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:34.016407  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:34.016345  681759 retry.go:31] will retry after 1.382435075s: waiting for machine to come up
	I0130 22:13:35.400870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:35.401294  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:35.401327  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:35.401233  681759 retry.go:31] will retry after 1.53975085s: waiting for machine to come up
	I0130 22:13:37.909186  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075686172s)
	I0130 22:13:37.909214  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 22:13:37.909257  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:37.909303  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:39.052225  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.142886078s)
	I0130 22:13:39.052285  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 22:13:39.052326  680786 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:39.052412  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:36.942944  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:36.943539  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:36.943580  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:36.943478  681759 retry.go:31] will retry after 1.888978312s: waiting for machine to come up
	I0130 22:13:38.834886  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:38.835467  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:38.835508  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:38.835393  681759 retry.go:31] will retry after 1.774102713s: waiting for machine to come up
	I0130 22:13:41.133330  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080888409s)
	I0130 22:13:41.133358  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 22:13:41.133383  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:41.133432  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:43.814683  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.681223745s)
	I0130 22:13:43.814716  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 22:13:43.814742  680786 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:43.814779  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:40.611628  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:40.612048  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:40.612083  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:40.611995  681759 retry.go:31] will retry after 2.428322726s: waiting for machine to come up
	I0130 22:13:43.041506  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:43.041916  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:43.041950  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:43.041859  681759 retry.go:31] will retry after 4.531865882s: waiting for machine to come up
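The retry.go lines above show libmachine polling the libvirt DHCP lease for the domain's IP, sleeping for a randomized, growing interval between attempts. A minimal sketch of that wait-with-backoff pattern is below; the condition function, initial delay, and growth factor are illustrative assumptions, not minikube's actual values:

	package main
	
	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)
	
	// waitFor polls cond until it succeeds or maxWait elapses.
	func waitFor(cond func() error, maxWait time.Duration) error {
		deadline := time.Now().Add(maxWait)
		backoff := 300 * time.Millisecond
		for {
			if err := cond(); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return errors.New("timed out waiting for condition")
			}
			// Randomize and grow the delay, as the "will retry after ..." lines show.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
			fmt.Printf("will retry after %v\n", sleep)
			time.Sleep(sleep)
			backoff = backoff * 3 / 2
		}
	}
	
	func main() {
		err := waitFor(func() error { return errors.New("machine has no IP yet") }, 2*time.Second)
		fmt.Println(err)
	}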
	I0130 22:13:48.690103  681007 start.go:369] acquired machines lock for "default-k8s-diff-port-850803" in 4m39.687788229s
	I0130 22:13:48.690177  681007 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:48.690188  681007 fix.go:54] fixHost starting: 
	I0130 22:13:48.690569  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:48.690606  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:48.709730  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0130 22:13:48.710142  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:48.710684  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:13:48.710714  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:48.711070  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:48.711280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:13:48.711446  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:13:48.712865  681007 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850803: state=Stopped err=<nil>
	I0130 22:13:48.712909  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	W0130 22:13:48.713065  681007 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:48.716450  681007 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850803" ...
	I0130 22:13:48.717867  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Start
	I0130 22:13:48.718031  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring networks are active...
	I0130 22:13:48.718700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network default is active
	I0130 22:13:48.719030  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network mk-default-k8s-diff-port-850803 is active
	I0130 22:13:48.719391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Getting domain xml...
	I0130 22:13:48.720046  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Creating domain...
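Restarting the existing kvm2 VM, as logged above, amounts to making sure the required libvirt networks are active and then booting the stopped domain. The sketch below approximates those steps with the virsh CLI rather than the libvirt API the kvm2 driver actually uses; the domain and network names are taken from the log, but treat the commands as an illustration, not the driver's real code path:

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run shells out to virsh and prints the result; errors such as
	// "network is already active" are expected and harmless here.
	func run(args ...string) {
		out, err := exec.Command("sudo", append([]string{"virsh"}, args...)...).CombinedOutput()
		fmt.Printf("virsh %v: err=%v\n%s", args, err, out)
	}
	
	func main() {
		// Ensure the networks the domain depends on are active.
		run("net-start", "default")
		run("net-start", "mk-default-k8s-diff-port-850803")
		// Boot the stopped domain.
		run("start", "default-k8s-diff-port-850803")
	}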
	I0130 22:13:44.761511  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 22:13:44.761571  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:44.761627  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:46.718526  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.956864919s)
	I0130 22:13:46.718569  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 22:13:46.718605  680786 cache_images.go:123] Successfully loaded all cached images
	I0130 22:13:46.718612  680786 cache_images.go:92] LoadImages completed in 16.376507144s
	I0130 22:13:46.718742  680786 ssh_runner.go:195] Run: crio config
	I0130 22:13:46.782286  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:13:46.782311  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:46.782332  680786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:46.782372  680786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-023824 NodeName:no-preload-023824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:46.782544  680786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-023824"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:46.782617  680786 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-023824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:13:46.782674  680786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 22:13:46.792236  680786 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:46.792309  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:46.800361  680786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 22:13:46.816070  680786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 22:13:46.830820  680786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 22:13:46.846493  680786 ssh_runner.go:195] Run: grep 192.168.61.232	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:46.849883  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:46.861414  680786 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824 for IP: 192.168.61.232
	I0130 22:13:46.861442  680786 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:46.861617  680786 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:46.861664  680786 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:46.861767  680786 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.key
	I0130 22:13:46.861831  680786 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key.e2a9f73e
	I0130 22:13:46.861872  680786 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key
	I0130 22:13:46.862006  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:46.862040  680786 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:46.862051  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:46.862074  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:46.862095  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:46.862118  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:46.862163  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:46.863014  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:46.887626  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:13:46.910152  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:46.931711  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:46.953156  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:46.974390  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:46.996094  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:47.017226  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:47.038317  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:47.059119  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:47.080077  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:47.101123  680786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:47.116152  680786 ssh_runner.go:195] Run: openssl version
	I0130 22:13:47.121529  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:47.130166  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134329  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134391  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.139537  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:47.148157  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:47.156558  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160623  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160682  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.165652  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:47.174350  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:47.183169  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187220  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187245  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.192369  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:13:47.201432  680786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:47.205518  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:47.210821  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:47.216074  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:47.221255  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:47.226609  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:47.231891  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 22:13:47.237220  680786 kubeadm.go:404] StartCluster: {Name:no-preload-023824 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:47.237355  680786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:47.237395  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:47.277488  680786 cri.go:89] found id: ""
	I0130 22:13:47.277561  680786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:47.286193  680786 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:47.286220  680786 kubeadm.go:636] restartCluster start
	I0130 22:13:47.286276  680786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:47.294206  680786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.295888  680786 kubeconfig.go:92] found "no-preload-023824" server: "https://192.168.61.232:8443"
	I0130 22:13:47.299852  680786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:47.307350  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.307401  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.317985  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.808078  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.808141  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.819689  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.308177  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.308241  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.319138  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.808388  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.808448  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.819501  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:49.308165  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.308254  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.319364  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
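The restartCluster path above polls for a running kube-apiserver roughly every half second by running pgrep inside the VM and treating a non-zero exit as "not up yet". A hedged, simplified stand-in for that poll is shown below; the pgrep pattern matches the log, but the timeout and direct local execution are assumptions (minikube issues the command over SSH):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForAPIServerPID repeats the pgrep check until a PID appears or the deadline passes.
	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence visible in the log
		}
		return "", fmt.Errorf("kube-apiserver did not appear within %v", timeout)
	}
	
	func main() {
		pid, err := waitForAPIServerPID(30 * time.Second)
		fmt.Println(pid, err)
	}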
	I0130 22:13:47.577701  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578126  680821 main.go:141] libmachine: (embed-certs-713938) Found IP for machine: 192.168.72.213
	I0130 22:13:47.578150  680821 main.go:141] libmachine: (embed-certs-713938) Reserving static IP address...
	I0130 22:13:47.578166  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has current primary IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578564  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.578605  680821 main.go:141] libmachine: (embed-certs-713938) DBG | skip adding static IP to network mk-embed-certs-713938 - found existing host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"}
	I0130 22:13:47.578616  680821 main.go:141] libmachine: (embed-certs-713938) Reserved static IP address: 192.168.72.213
	I0130 22:13:47.578630  680821 main.go:141] libmachine: (embed-certs-713938) Waiting for SSH to be available...
	I0130 22:13:47.578646  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Getting to WaitForSSH function...
	I0130 22:13:47.580757  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581084  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.581120  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581221  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH client type: external
	I0130 22:13:47.581282  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa (-rw-------)
	I0130 22:13:47.581324  680821 main.go:141] libmachine: (embed-certs-713938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:47.581344  680821 main.go:141] libmachine: (embed-certs-713938) DBG | About to run SSH command:
	I0130 22:13:47.581357  680821 main.go:141] libmachine: (embed-certs-713938) DBG | exit 0
	I0130 22:13:47.669006  680821 main.go:141] libmachine: (embed-certs-713938) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:47.669397  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetConfigRaw
	I0130 22:13:47.670084  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.672437  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.672782  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.672806  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.673048  680821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/config.json ...
	I0130 22:13:47.673225  680821 machine.go:88] provisioning docker machine ...
	I0130 22:13:47.673243  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:47.673432  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673608  680821 buildroot.go:166] provisioning hostname "embed-certs-713938"
	I0130 22:13:47.673628  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673766  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.675747  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676016  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.676043  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676178  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.676351  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676484  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676618  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.676743  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.677070  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.677083  680821 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-713938 && echo "embed-certs-713938" | sudo tee /etc/hostname
	I0130 22:13:47.800976  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-713938
	
	I0130 22:13:47.801011  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.803566  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.803876  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.803901  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.804047  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.804235  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804417  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.804699  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.805016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.805033  680821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-713938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-713938/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-713938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:47.928846  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:47.928882  680821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:47.928908  680821 buildroot.go:174] setting up certificates
	I0130 22:13:47.928956  680821 provision.go:83] configureAuth start
	I0130 22:13:47.928976  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.929283  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.931756  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932014  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.932045  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932206  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.934351  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934647  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.934670  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934814  680821 provision.go:138] copyHostCerts
	I0130 22:13:47.934875  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:47.934889  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:47.934963  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:47.935072  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:47.935087  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:47.935120  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:47.935196  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:47.935206  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:47.935234  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:47.935349  680821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.embed-certs-713938 san=[192.168.72.213 192.168.72.213 localhost 127.0.0.1 minikube embed-certs-713938]
	I0130 22:13:47.995543  680821 provision.go:172] copyRemoteCerts
	I0130 22:13:47.995624  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:47.995659  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.998113  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998409  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.998436  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998636  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.998822  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.999004  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.999123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.086454  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:48.108713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:48.131124  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:48.153234  680821 provision.go:86] duration metric: configureAuth took 224.258095ms
	I0130 22:13:48.153269  680821 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:48.153447  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:13:48.153554  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.156268  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156673  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.156705  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156847  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.157070  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157294  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157481  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.157649  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.158119  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.158143  680821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:48.449095  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:48.449131  680821 machine.go:91] provisioned docker machine in 775.890813ms
	I0130 22:13:48.449146  680821 start.go:300] post-start starting for "embed-certs-713938" (driver="kvm2")
	I0130 22:13:48.449161  680821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:48.449185  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.449573  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:48.449605  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.452408  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.452831  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.452866  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.453009  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.453240  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.453416  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.453566  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.539764  680821 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:48.543876  680821 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:48.543905  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:48.543969  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:48.544045  680821 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:48.544163  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:48.552947  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:48.573560  680821 start.go:303] post-start completed in 124.400867ms
	I0130 22:13:48.573588  680821 fix.go:56] fixHost completed within 19.671118722s
	I0130 22:13:48.573615  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.576352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576755  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.576777  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576965  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.577170  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577337  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.577708  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.578016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.578029  680821 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:48.689910  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652828.640343702
	
	I0130 22:13:48.689937  680821 fix.go:206] guest clock: 1706652828.640343702
	I0130 22:13:48.689948  680821 fix.go:219] Guest: 2024-01-30 22:13:48.640343702 +0000 UTC Remote: 2024-01-30 22:13:48.573593176 +0000 UTC m=+303.018932163 (delta=66.750526ms)
	I0130 22:13:48.690012  680821 fix.go:190] guest clock delta is within tolerance: 66.750526ms
	I0130 22:13:48.690023  680821 start.go:83] releasing machines lock for "embed-certs-713938", held for 19.787596053s
	I0130 22:13:48.690062  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.690367  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:48.692836  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693147  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.693180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693372  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.693895  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694095  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694178  680821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:48.694232  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.694331  680821 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:48.694354  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.696786  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697137  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697205  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697357  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697529  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.697648  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697675  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697706  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.697830  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697910  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.697985  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.698143  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.698307  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.807627  680821 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:48.813332  680821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:48.953919  680821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:48.960672  680821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:48.960744  680821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:48.977684  680821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:48.977702  680821 start.go:475] detecting cgroup driver to use...
	I0130 22:13:48.977766  680821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:48.989811  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:49.001223  680821 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:49.001281  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:49.012649  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:49.024426  680821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:49.130220  680821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:49.248922  680821 docker.go:233] disabling docker service ...
	I0130 22:13:49.248999  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:49.262066  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:49.272736  680821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:49.394001  680821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:49.514043  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:49.526282  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:49.545253  680821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:49.545303  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.554715  680821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:49.554775  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.564248  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.573151  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.582148  680821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:49.591604  680821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:49.599683  680821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:49.599722  680821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:49.611807  680821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:49.622179  680821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:49.745824  680821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:49.924707  680821 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:49.924788  680821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:49.930158  680821 start.go:543] Will wait 60s for crictl version
	I0130 22:13:49.930234  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:13:49.933971  680821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:49.973662  680821 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:49.973736  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.018705  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.070907  680821 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:13:50.072352  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:50.075100  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075487  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:50.075519  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075750  680821 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:50.079538  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:50.093965  680821 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:13:50.094028  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:50.133425  680821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:13:50.133506  680821 ssh_runner.go:195] Run: which lz4
	I0130 22:13:50.137267  680821 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:13:50.141273  680821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:13:50.141299  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:13:49.938197  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting to get IP...
	I0130 22:13:49.939301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939717  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939806  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:49.939711  681876 retry.go:31] will retry after 300.092754ms: waiting for machine to come up
	I0130 22:13:50.241301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241860  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241890  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.241804  681876 retry.go:31] will retry after 313.990905ms: waiting for machine to come up
	I0130 22:13:50.557661  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558161  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.558077  681876 retry.go:31] will retry after 484.197655ms: waiting for machine to come up
	I0130 22:13:51.043815  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044313  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044345  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.044255  681876 retry.go:31] will retry after 595.208415ms: waiting for machine to come up
	I0130 22:13:51.640765  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641244  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641281  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.641207  681876 retry.go:31] will retry after 646.272845ms: waiting for machine to come up
	I0130 22:13:52.288980  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289729  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:52.289599  681876 retry.go:31] will retry after 864.623353ms: waiting for machine to come up
	I0130 22:13:53.155328  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155826  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:53.155750  681876 retry.go:31] will retry after 943.126628ms: waiting for machine to come up
	I0130 22:13:49.807842  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.807941  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.826075  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.308394  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.308476  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.323858  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.807449  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.807538  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.823237  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.307590  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.307684  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.322999  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.807466  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.807551  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.822502  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.308300  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.308431  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.329435  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.808248  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.808379  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.823821  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.308375  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.308462  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.321178  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.807637  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.807748  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.823761  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:54.308223  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.308300  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.320791  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.023827  680821 crio.go:444] Took 1.886590 seconds to copy over tarball
	I0130 22:13:52.023892  680821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:13:55.116587  680821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.092664003s)
	I0130 22:13:55.116614  680821 crio.go:451] Took 3.092762 seconds to extract the tarball
	I0130 22:13:55.116644  680821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:13:55.159215  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:55.210233  680821 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:13:55.210263  680821 cache_images.go:84] Images are preloaded, skipping loading
	I0130 22:13:55.210344  680821 ssh_runner.go:195] Run: crio config
	I0130 22:13:55.268468  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:13:55.268496  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:55.268519  680821 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:55.268545  680821 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-713938 NodeName:embed-certs-713938 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:55.268710  680821 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-713938"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:55.268801  680821 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-713938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:13:55.268880  680821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:13:55.278244  680821 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:55.278321  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:55.287034  680821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0130 22:13:55.302012  680821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:13:55.318716  680821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0130 22:13:55.335364  680821 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:55.338950  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:55.349780  680821 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938 for IP: 192.168.72.213
	I0130 22:13:55.349814  680821 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:55.350000  680821 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:55.350058  680821 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:55.350157  680821 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/client.key
	I0130 22:13:55.350242  680821 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key.0982f839
	I0130 22:13:55.350299  680821 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key
	I0130 22:13:55.350469  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:55.350520  680821 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:55.350539  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:55.350577  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:55.350612  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:55.350648  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:55.350707  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:55.351807  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:55.373160  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 22:13:55.394634  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:55.416281  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:55.438713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:55.460324  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:55.481480  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:55.502869  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:55.524520  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:55.547601  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:55.569483  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:55.590741  680821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:54.100347  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:54.100763  681876 retry.go:31] will retry after 1.412406258s: waiting for machine to come up
	I0130 22:13:55.514929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515302  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515362  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:55.515267  681876 retry.go:31] will retry after 1.440442596s: waiting for machine to come up
	I0130 22:13:56.957895  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958367  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958390  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:56.958326  681876 retry.go:31] will retry after 1.996277334s: waiting for machine to come up
	I0130 22:13:54.807936  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.808021  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.824410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.307845  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.307937  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.320645  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.808272  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.808384  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.820051  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.307482  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.307567  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.319410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.808044  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.808167  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.820440  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.308301  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.308409  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.323612  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.323650  680786 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:13:57.323715  680786 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:13:57.323733  680786 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:13:57.323798  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:57.364379  680786 cri.go:89] found id: ""
	I0130 22:13:57.364467  680786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:13:57.380175  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:13:57.390701  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:13:57.390770  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400039  680786 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400071  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:57.546658  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.567155  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020447474s)
	I0130 22:13:58.567192  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.794332  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.875254  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.943890  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:13:58.944000  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:59.444721  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:55.608619  680821 ssh_runner.go:195] Run: openssl version
	I0130 22:13:55.880188  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:55.890762  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895346  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895423  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.900872  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:55.911050  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:55.921117  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925362  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925410  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.930499  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:55.940167  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:55.950284  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954643  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954688  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.959830  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:13:55.969573  680821 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:55.973654  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:55.980878  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:55.988262  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:55.995379  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:56.002387  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:56.007729  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 22:13:56.013164  680821 kubeadm.go:404] StartCluster: {Name:embed-certs-713938 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:56.013256  680821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:56.013290  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:56.054588  680821 cri.go:89] found id: ""
	I0130 22:13:56.054670  680821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:56.064691  680821 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:56.064720  680821 kubeadm.go:636] restartCluster start
	I0130 22:13:56.064781  680821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:56.074132  680821 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.075653  680821 kubeconfig.go:92] found "embed-certs-713938" server: "https://192.168.72.213:8443"
	I0130 22:13:56.078677  680821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:56.087919  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.087968  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.099213  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.588843  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.588940  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.601681  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.088185  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.088291  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.103229  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.588880  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.589012  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.604127  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.088751  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.088880  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.100833  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.588147  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.588264  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.604368  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.088571  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.088681  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.104028  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.588569  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.588684  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.602995  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.088596  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.088729  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.104195  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.588883  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.588987  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.605168  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.956101  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956568  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956598  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:58.956511  681876 retry.go:31] will retry after 2.859682959s: waiting for machine to come up
	I0130 22:14:01.819863  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820443  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820476  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:01.820388  681876 retry.go:31] will retry after 2.840054468s: waiting for machine to come up
	I0130 22:13:59.945172  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.444900  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.945042  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.444410  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.486688  680786 api_server.go:72] duration metric: took 2.54280014s to wait for apiserver process to appear ...
	I0130 22:14:01.486719  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:01.486775  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.487585  680786 api_server.go:269] stopped: https://192.168.61.232:8443/healthz: Get "https://192.168.61.232:8443/healthz": dial tcp 192.168.61.232:8443: connect: connection refused
	I0130 22:14:01.987279  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.088999  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.089091  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.104740  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:01.588046  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.588171  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.603186  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.088381  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.088495  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.104148  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.588728  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.588850  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.603782  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.088297  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.088396  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.101192  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.588856  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.588967  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.600516  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.088592  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.088688  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.101572  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.588042  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.588181  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.600890  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.088324  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.088437  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.103896  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.588678  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.588786  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.604329  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.974310  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:04.974343  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:04.974361  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.032790  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.032856  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.032882  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.052788  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.052811  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.487474  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.494053  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.494084  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:05.987783  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.994015  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.994049  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:06.487723  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:06.492959  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:14:06.500169  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:14:06.500208  680786 api_server.go:131] duration metric: took 5.013479999s to wait for apiserver health ...
	I0130 22:14:06.500221  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:14:06.500230  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:06.502253  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:04.661649  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.661976  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.662010  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:04.661932  681876 retry.go:31] will retry after 4.414855002s: waiting for machine to come up
	I0130 22:14:06.503764  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:06.514909  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:06.534344  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:06.546282  680786 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:06.546323  680786 system_pods.go:61] "coredns-76f75df574-cvjdk" [3f6526d5-7bf6-4d51-96bc-9dc6f70ead98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:06.546333  680786 system_pods.go:61] "etcd-no-preload-023824" [89ebff7a-3ac5-4aa7-aab7-9c61e59027a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:06.546352  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [bea4217d-ad4c-4945-ac59-1589976698e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:06.546369  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [4a1866ae-14ce-4132-bc99-225c518ab4bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:06.546394  680786 system_pods.go:61] "kube-proxy-phh5j" [3e662e91-7886-44e7-87a0-4a727011062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:06.546407  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [ad7a7f1c-6aa6-4e16-94d5-e5db7d3e39f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:06.546422  680786 system_pods.go:61] "metrics-server-57f55c9bc5-qfj5x" [13ae9773-8607-43ae-a122-4f97b367a954] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:06.546433  680786 system_pods.go:61] "storage-provisioner" [50dd4d19-5e05-47b7-a11f-5975bc6ef0e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:06.546445  680786 system_pods.go:74] duration metric: took 12.076118ms to wait for pod list to return data ...
	I0130 22:14:06.546458  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:06.549604  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:06.549634  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:06.549645  680786 node_conditions.go:105] duration metric: took 3.179552ms to run NodePressure ...
	I0130 22:14:06.549662  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.858172  680786 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863712  680786 kubeadm.go:787] kubelet initialised
	I0130 22:14:06.863731  680786 kubeadm.go:788] duration metric: took 5.530573ms waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863738  680786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:06.869540  680786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:08.886275  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:10.543927  680506 start.go:369] acquired machines lock for "old-k8s-version-912992" in 58.237287777s
	I0130 22:14:10.543984  680506 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:14:10.543993  680506 fix.go:54] fixHost starting: 
	I0130 22:14:10.544466  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:14:10.544494  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:14:10.563544  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0130 22:14:10.564063  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:14:10.564683  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:14:10.564705  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:14:10.565128  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:14:10.565338  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:10.565526  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:14:10.567290  680506 fix.go:102] recreateIfNeeded on old-k8s-version-912992: state=Stopped err=<nil>
	I0130 22:14:10.567314  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	W0130 22:14:10.567565  680506 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:14:10.569441  680506 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-912992" ...
	I0130 22:14:06.089016  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:06.089138  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:06.101226  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:06.101265  680821 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:06.101276  680821 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:06.101292  680821 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:06.101373  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:06.145816  680821 cri.go:89] found id: ""
	I0130 22:14:06.145935  680821 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:06.162118  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:06.174308  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:06.174379  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186134  680821 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186164  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.312544  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.860323  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.068181  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.151741  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.236354  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:07.236461  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:07.737169  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.237398  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.737483  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.237152  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.736646  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.763936  680821 api_server.go:72] duration metric: took 2.527584407s to wait for apiserver process to appear ...
	I0130 22:14:09.763962  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:09.763991  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:09.078352  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078935  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Found IP for machine: 192.168.50.254
	I0130 22:14:09.078985  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has current primary IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078997  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserving static IP address...
	I0130 22:14:09.079366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.079391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | skip adding static IP to network mk-default-k8s-diff-port-850803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"}
	I0130 22:14:09.079411  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Getting to WaitForSSH function...
	I0130 22:14:09.079431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserved static IP address: 192.168.50.254
	I0130 22:14:09.079442  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for SSH to be available...
	I0130 22:14:09.082189  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082612  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.082638  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082892  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH client type: external
	I0130 22:14:09.082917  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa (-rw-------)
	I0130 22:14:09.082982  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:09.082996  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | About to run SSH command:
	I0130 22:14:09.083009  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | exit 0
	I0130 22:14:09.182746  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:09.183304  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetConfigRaw
	I0130 22:14:09.184088  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.187115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187576  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.187606  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187972  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:14:09.188234  681007 machine.go:88] provisioning docker machine ...
	I0130 22:14:09.188262  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:09.188470  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188648  681007 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850803"
	I0130 22:14:09.188670  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188822  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.191366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191769  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.191808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.192148  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192332  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192488  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.192732  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.193245  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.193273  681007 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850803 && echo "default-k8s-diff-port-850803" | sudo tee /etc/hostname
	I0130 22:14:09.344664  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850803
	
	I0130 22:14:09.344700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.348016  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348485  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.348516  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348685  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.348962  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.349505  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.349996  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.350025  681007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:09.490740  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:09.490778  681007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:09.490812  681007 buildroot.go:174] setting up certificates
	I0130 22:14:09.490825  681007 provision.go:83] configureAuth start
	I0130 22:14:09.490844  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.491225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.494577  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495040  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.495085  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495194  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.497931  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498407  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.498433  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498638  681007 provision.go:138] copyHostCerts
	I0130 22:14:09.498702  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:09.498717  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:09.498778  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:09.498898  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:09.498912  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:09.498955  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:09.499039  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:09.499052  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:09.499080  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:09.499147  681007 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850803 san=[192.168.50.254 192.168.50.254 localhost 127.0.0.1 minikube default-k8s-diff-port-850803]
	I0130 22:14:09.749739  681007 provision.go:172] copyRemoteCerts
	I0130 22:14:09.749810  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:09.749848  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.753032  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753498  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.753533  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753727  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.753945  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.754170  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.754364  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:09.851640  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:09.879906  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 22:14:09.907030  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:09.934916  681007 provision.go:86] duration metric: configureAuth took 444.054165ms
	I0130 22:14:09.934954  681007 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:09.935190  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:14:09.935324  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.938507  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.938854  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.938894  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.939068  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.939312  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939517  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.939899  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.940390  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.940421  681007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:10.275894  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:10.275935  681007 machine.go:91] provisioned docker machine in 1.087679661s
	I0130 22:14:10.275950  681007 start.go:300] post-start starting for "default-k8s-diff-port-850803" (driver="kvm2")
	I0130 22:14:10.275965  681007 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:10.275989  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.276387  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:10.276445  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.279676  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280069  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.280115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280364  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.280584  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.280766  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.280923  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.373204  681007 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:10.377609  681007 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:10.377637  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:10.377705  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:10.377773  681007 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:10.377857  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:10.388096  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:10.414529  681007 start.go:303] post-start completed in 138.561717ms
	I0130 22:14:10.414557  681007 fix.go:56] fixHost completed within 21.7243684s
	I0130 22:14:10.414586  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.417282  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417709  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.417741  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417872  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.418063  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418233  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418356  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.418555  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:10.419070  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:10.419086  681007 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:14:10.543719  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652850.477584158
	
	I0130 22:14:10.543751  681007 fix.go:206] guest clock: 1706652850.477584158
	I0130 22:14:10.543762  681007 fix.go:219] Guest: 2024-01-30 22:14:10.477584158 +0000 UTC Remote: 2024-01-30 22:14:10.414562089 +0000 UTC m=+301.564256760 (delta=63.022069ms)
	I0130 22:14:10.543828  681007 fix.go:190] guest clock delta is within tolerance: 63.022069ms
	I0130 22:14:10.543837  681007 start.go:83] releasing machines lock for "default-k8s-diff-port-850803", held for 21.853682485s
	I0130 22:14:10.543884  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.544172  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:10.547453  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.547833  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.547907  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.548185  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554556  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554902  681007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:10.554975  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.555050  681007 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:10.555093  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.558413  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559108  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559387  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559438  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559764  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.559857  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.560050  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560137  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.560224  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560350  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560579  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560578  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.560760  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.681106  681007 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:10.688790  681007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:10.845108  681007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:10.853366  681007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:10.853540  681007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:10.873299  681007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:10.873326  681007 start.go:475] detecting cgroup driver to use...
	I0130 22:14:10.873426  681007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:10.891563  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:10.908180  681007 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:10.908258  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:10.921344  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:10.935068  681007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:11.036505  681007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:11.151640  681007 docker.go:233] disabling docker service ...
	I0130 22:14:11.151718  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:11.167082  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:11.178680  681007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:11.303325  681007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:11.410097  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:11.426297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:11.452546  681007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:14:11.452634  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.463081  681007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:11.463156  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.472742  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.482828  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.494761  681007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:11.507028  681007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:11.517686  681007 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:11.517742  681007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:11.530301  681007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:11.541975  681007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:11.696623  681007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:14:11.913271  681007 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:11.913391  681007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:11.919870  681007 start.go:543] Will wait 60s for crictl version
	I0130 22:14:11.919944  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:14:11.926064  681007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:11.975070  681007 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:11.975177  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.033039  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.081059  681007 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:14:10.570784  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Start
	I0130 22:14:10.571067  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring networks are active...
	I0130 22:14:10.571790  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network default is active
	I0130 22:14:10.572160  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network mk-old-k8s-version-912992 is active
	I0130 22:14:10.572697  680506 main.go:141] libmachine: (old-k8s-version-912992) Getting domain xml...
	I0130 22:14:10.573411  680506 main.go:141] libmachine: (old-k8s-version-912992) Creating domain...
	I0130 22:14:11.948333  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting to get IP...
	I0130 22:14:11.949455  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:11.950018  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:11.950060  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:11.949981  682021 retry.go:31] will retry after 276.511731ms: waiting for machine to come up
	I0130 22:14:12.228702  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.229508  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.229544  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.229445  682021 retry.go:31] will retry after 291.918453ms: waiting for machine to come up
	I0130 22:14:12.522882  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.523484  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.523520  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.523451  682021 retry.go:31] will retry after 411.891157ms: waiting for machine to come up
	I0130 22:14:12.082431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:12.085750  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086144  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:12.086175  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086400  681007 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:12.091494  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
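
The hosts-file pipeline above drops any stale host.minikube.internal entry and appends the fresh mapping before copying the result back over /etc/hosts. A short Go sketch of the same idea, assuming the tab-separated layout used by the shell command; illustrative only.

// hosts_entry.go: sketch of the grep -v / echo / cp rewrite shown above.
package main

import (
	"os"
	"strings"
)

// ensureHostsEntry drops any existing line ending in "\thost" and appends "ip\thost".
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}
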
	I0130 22:14:12.104832  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:14:12.104904  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:12.160529  681007 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:14:12.160610  681007 ssh_runner.go:195] Run: which lz4
	I0130 22:14:12.165037  681007 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 22:14:12.169743  681007 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:12.169772  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:14:11.379194  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.394473  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.254742  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.254788  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.254809  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.438140  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.438192  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.438210  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.470956  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.470985  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.764535  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.773346  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:13.773385  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.264393  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.277818  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:14.277878  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.764145  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.769720  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:14:14.778872  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:14.778910  680821 api_server.go:131] duration metric: took 5.01493889s to wait for apiserver health ...
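
The healthz progression above is the usual restart sequence: 403 while the unauthenticated probe hits an apiserver whose RBAC bootstrap has not finished, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are still pending, then 200. A hedged Go sketch of the same wait loop follows; it skips TLS verification because the probe is anonymous, and the URL and timeout are placeholders.

// healthz_wait.go: poll the apiserver /healthz endpoint until it returns 200
// or a deadline passes; 403/500 bodies are expected while bootstrap finishes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitHealthz("https://192.168.72.213:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
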
	I0130 22:14:14.778923  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:14:14.778931  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:14.780880  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:14.782682  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:14.798955  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:14.824975  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:14.841121  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:14.841166  680821 system_pods.go:61] "coredns-5dd5756b68-wcncl" [43c0f4bc-1d47-4337-a179-bb27a4164ca5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:14.841177  680821 system_pods.go:61] "etcd-embed-certs-713938" [f8c3bfda-0fca-429b-a0a2-b4fc1d496085] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:14.841196  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [7536531d-a1bd-451b-8530-143f9a41b85c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:14.841209  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [76c2d0eb-823a-41df-91dc-584acb56f81e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:14.841222  680821 system_pods.go:61] "kube-proxy-4c6nn" [253bee90-32a4-4dc0-9db7-bdfa663bcc96] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:14.841233  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [3b4e8324-e074-45ab-b24c-df1bd226e12e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:14.841247  680821 system_pods.go:61] "metrics-server-57f55c9bc5-hcg7l" [25906794-7927-48cf-8f80-52f8a2a68d99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:14.841265  680821 system_pods.go:61] "storage-provisioner" [5820d2a9-be84-42e8-ac25-d4ac1cf22d90] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:14.841275  680821 system_pods.go:74] duration metric: took 16.275602ms to wait for pod list to return data ...
	I0130 22:14:14.841289  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:14.848145  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:14.848183  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:14.848198  680821 node_conditions.go:105] duration metric: took 6.903129ms to run NodePressure ...
	I0130 22:14:14.848221  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:15.186295  680821 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191845  680821 kubeadm.go:787] kubelet initialised
	I0130 22:14:15.191872  680821 kubeadm.go:788] duration metric: took 5.54389ms waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191883  680821 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:15.202037  680821 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
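
The pod_ready waits above amount to checking the PodReady condition on each system-critical pod. A client-go sketch of an equivalent check is below; the kubeconfig path and the kube-dns label selector are placeholders, and this is not minikube's pod_ready.go.

// pod_ready.go: list pods by label and report whether their PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s Ready=%v\n", p.Name, isReady(p))
	}
}
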
	I0130 22:14:12.937414  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.938094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.938126  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.937994  682021 retry.go:31] will retry after 576.497569ms: waiting for machine to come up
	I0130 22:14:13.515903  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:13.516521  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:13.516547  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:13.516421  682021 retry.go:31] will retry after 519.706227ms: waiting for machine to come up
	I0130 22:14:14.037307  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.037937  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.037967  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.037845  682021 retry.go:31] will retry after 797.706186ms: waiting for machine to come up
	I0130 22:14:14.836997  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.837662  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.837686  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.837561  682021 retry.go:31] will retry after 782.265584ms: waiting for machine to come up
	I0130 22:14:15.621147  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:15.621747  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:15.621779  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:15.621706  682021 retry.go:31] will retry after 1.00093966s: waiting for machine to come up
	I0130 22:14:16.624002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:16.624474  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:16.624506  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:16.624365  682021 retry.go:31] will retry after 1.760162378s: waiting for machine to come up
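
The libmachine loop above re-queries the domain's DHCP lease with growing, jittered delays until the VM reports an address. A small Go sketch of that loop shape, where lookupIP is a hypothetical stand-in for the libvirt lease query:

// wait_ip.go: retry a lookup with growing, jittered delays until it succeeds
// or a deadline passes, like the "waiting for machine to come up" lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func lookupIP() (string, error) {
	// Placeholder: in minikube this asks libvirt for the domain's DHCP lease.
	return "", errors.New("no lease yet")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", jittered)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base delay, roughly like the log
	}
	return "", fmt.Errorf("machine did not get an IP within %s", timeout)
}

func main() {
	if _, err := waitForIP(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
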
	I0130 22:14:14.166451  681007 crio.go:444] Took 2.001438 seconds to copy over tarball
	I0130 22:14:14.166549  681007 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:17.707309  681007 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.540722039s)
	I0130 22:14:17.707346  681007 crio.go:451] Took 3.540858 seconds to extract the tarball
	I0130 22:14:17.707367  681007 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:14:17.751814  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:17.817529  681007 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:14:17.817564  681007 cache_images.go:84] Images are preloaded, skipping loading
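
The preload path above first parses the output of crictl images --output json and looks for the expected kube-apiserver tag; only when it is absent does minikube copy and unpack the preloaded tarball, which is what the surrounding lines show. A Go sketch of that check; the JSON field names are assumptions based on crictl's output format.

// preload_check.go: ask crictl whether a given image tag is already present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(want string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	if err != nil {
		panic(err)
	}
	fmt.Println("preloaded:", ok)
}
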
	I0130 22:14:17.817650  681007 ssh_runner.go:195] Run: crio config
	I0130 22:14:17.882693  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:17.882719  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:17.882745  681007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:17.882777  681007 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850803 NodeName:default-k8s-diff-port-850803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:14:17.882963  681007 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850803"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:17.883060  681007 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
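
The kubelet drop-in above is generated from three node-specific values: the Kubernetes version, the hostname override, and the node IP. A text/template sketch that renders the same ExecStart line from those values; the values are the ones from this log and the template is illustrative, not minikube's.

// kubelet_dropin.go: render the 10-kubeadm.conf drop-in from a few parameters.
package main

import (
	"os"
	"text/template"
)

const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}

[Install]
`

func main() {
	t := template.Must(template.New("dropin").Parse(dropin))
	if err := t.Execute(os.Stdout, struct{ Version, Node, IP string }{
		Version: "v1.28.4",
		Node:    "default-k8s-diff-port-850803",
		IP:      "192.168.50.254",
	}); err != nil {
		panic(err)
	}
}
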
	I0130 22:14:17.883125  681007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:14:17.895645  681007 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:17.895725  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:17.906009  681007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0130 22:14:17.923445  681007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:17.941439  681007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0130 22:14:17.958729  681007 ssh_runner.go:195] Run: grep 192.168.50.254	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:17.962941  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:17.975030  681007 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803 for IP: 192.168.50.254
	I0130 22:14:17.975065  681007 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:17.975251  681007 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:17.975300  681007 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:17.975377  681007 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.key
	I0130 22:14:17.975436  681007 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key.c40bdd21
	I0130 22:14:17.975471  681007 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key
	I0130 22:14:17.975603  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:17.975634  681007 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:17.975642  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:17.975665  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:17.975689  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:17.975714  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:17.975751  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:17.976423  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:18.003363  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:18.029597  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:18.053558  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:14:18.077340  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:18.100959  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:18.124756  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:18.148266  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:18.171688  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:18.195020  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:18.221728  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:18.245353  681007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:18.262630  681007 ssh_runner.go:195] Run: openssl version
	I0130 22:14:18.268255  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:18.279361  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284264  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284318  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.290374  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:18.301414  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:18.312992  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317776  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317826  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.323596  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:18.334360  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:18.346052  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350871  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350917  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.358340  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:18.371640  681007 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:18.376906  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:18.383780  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:18.390468  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:18.396506  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:18.402525  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:18.407949  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
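
Each openssl x509 -checkend 86400 run above asks whether a certificate will still be valid 24 hours from now; a failure would force regeneration. The same test in Go with crypto/x509, as a sketch over two of the files checked above:

// cert_checkend.go: does the certificate stay valid for another 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func validFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-etcd-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		ok, err := validFor(p, 24*time.Hour)
		fmt.Println(p, ok, err)
	}
}
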
	I0130 22:14:18.413375  681007 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:18.413454  681007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:18.413546  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:18.460309  681007 cri.go:89] found id: ""
	I0130 22:14:18.460393  681007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:18.474036  681007 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:18.474062  681007 kubeadm.go:636] restartCluster start
	I0130 22:14:18.474153  681007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:18.484682  681007 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:18.486004  681007 kubeconfig.go:92] found "default-k8s-diff-port-850803" server: "https://192.168.50.254:8444"
	I0130 22:14:18.488661  681007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:18.499334  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:18.499389  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:18.512812  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:15.878232  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.047391  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:17.215329  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.367292  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:18.386828  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:18.387291  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:18.387324  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:18.387230  682021 retry.go:31] will retry after 1.961289931s: waiting for machine to come up
	I0130 22:14:20.351407  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:20.351939  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:20.351975  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:20.351883  682021 retry.go:31] will retry after 2.41188295s: waiting for machine to come up
	I0130 22:14:18.999791  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.011386  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.025823  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.499386  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.499505  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.513098  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.000365  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.000469  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.017498  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.500160  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.500286  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.517695  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.000275  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.000409  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.017613  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.499881  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.499974  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.516790  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.000448  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.000562  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.014377  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.499900  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.500014  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.513212  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.999725  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.999875  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.013983  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:23.499549  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.499654  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.515308  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.554357  680786 pod_ready.go:92] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.685256  680786 pod_ready.go:81] duration metric: took 12.815676408s waiting for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.685298  680786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705805  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.705843  680786 pod_ready.go:81] duration metric: took 20.535204ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705859  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716827  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.716859  680786 pod_ready.go:81] duration metric: took 10.990465ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716873  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224601  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.224631  680786 pod_ready.go:81] duration metric: took 507.749018ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224648  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231481  680786 pod_ready.go:92] pod "kube-proxy-phh5j" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.231507  680786 pod_ready.go:81] duration metric: took 6.849925ms waiting for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231519  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237347  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.237372  680786 pod_ready.go:81] duration metric: took 5.84531ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237383  680786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.246204  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:24.248275  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:21.709185  680821 pod_ready.go:92] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:21.709226  680821 pod_ready.go:81] duration metric: took 6.507155774s waiting for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:21.709240  680821 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716371  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.716398  680821 pod_ready.go:81] duration metric: took 2.007151614s waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716407  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722781  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.722803  680821 pod_ready.go:81] duration metric: took 6.390258ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722814  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729034  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.729055  680821 pod_ready.go:81] duration metric: took 6.235103ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729063  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737325  680821 pod_ready.go:92] pod "kube-proxy-4c6nn" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.737348  680821 pod_ready.go:81] duration metric: took 8.279273ms waiting for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737361  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.742989  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.743013  680821 pod_ready.go:81] duration metric: took 5.643901ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.743024  680821 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
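
metrics-server staying "Ready":"False" in these profiles is consistent with the CustomAddonRegistries entry in the profile config, which points the metrics-server image at fake.domain so the pull cannot succeed; that is an inference from the config dumped earlier, not something this log states. A client-go sketch for surfacing a pod's container waiting reason (kubeconfig path and label selector are assumptions):

// why_not_ready.go: print the waiting reason for each container of matching pods.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, cstat := range p.Status.ContainerStatuses {
			if w := cstat.State.Waiting; w != nil {
				fmt.Printf("%s/%s waiting: %s (%s)\n", p.Name, cstat.Name, w.Reason, w.Message)
			}
		}
	}
}
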
	I0130 22:14:22.766642  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:22.767267  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:22.767359  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:22.767247  682021 retry.go:31] will retry after 2.473522194s: waiting for machine to come up
	I0130 22:14:25.242661  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:25.243221  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:25.243246  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:25.243168  682021 retry.go:31] will retry after 4.117858968s: waiting for machine to come up
	I0130 22:14:23.999813  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.999897  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.012879  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.499381  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.499457  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.513834  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.999458  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.999554  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.014779  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.499957  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.500093  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.513275  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.999800  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.999901  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.011952  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.499447  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.499530  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.511962  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.999473  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.999579  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.012316  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:27.499767  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:27.499862  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.511793  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.000036  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.000127  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.012698  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.499393  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.499495  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.511459  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.511494  681007 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:28.511507  681007 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:28.511522  681007 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:28.511593  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:28.550124  681007 cri.go:89] found id: ""
	I0130 22:14:28.550200  681007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:28.566091  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:28.575952  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:28.576019  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584539  681007 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584559  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:28.715666  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
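For context, the repeated "Checking apiserver status" entries above poll for a kube-apiserver process roughly every 500ms until a deadline, after which the tooling decides the cluster "needs reconfigure" and re-runs the kubeadm init phases. The following is only a minimal Go sketch of that polling shape, assuming a local pgrep instead of minikube's SSH runner; it is not minikube's actual implementation.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForAPIServerPID polls for a kube-apiserver process (mirroring the
// "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above) until the
// context deadline expires. pgrep exits non-zero when nothing matches,
// which is what produces the "Process exited with status 1" warnings.
func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "sudo", "pgrep",
			"-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			return strings.TrimSpace(string(out)), nil
		}
		select {
		case <-ctx.Done():
			// corresponds to "needs reconfigure: apiserver error: context deadline exceeded"
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second) // illustrative deadline
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}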
	I0130 22:14:26.744291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.744825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:25.752959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.250440  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:30.251820  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:29.365529  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366106  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has current primary IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366142  680506 main.go:141] libmachine: (old-k8s-version-912992) Found IP for machine: 192.168.39.84
	I0130 22:14:29.366157  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserving static IP address...
	I0130 22:14:29.366732  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.366763  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserved static IP address: 192.168.39.84
	I0130 22:14:29.366789  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | skip adding static IP to network mk-old-k8s-version-912992 - found existing host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"}
	I0130 22:14:29.366805  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting for SSH to be available...
	I0130 22:14:29.366820  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Getting to WaitForSSH function...
	I0130 22:14:29.369195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369625  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.369648  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369851  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH client type: external
	I0130 22:14:29.369899  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa (-rw-------)
	I0130 22:14:29.369956  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:29.369986  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | About to run SSH command:
	I0130 22:14:29.370002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | exit 0
	I0130 22:14:29.469381  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:29.469800  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetConfigRaw
	I0130 22:14:29.470597  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.473253  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.473721  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.473748  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.474114  680506 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/config.json ...
	I0130 22:14:29.474312  680506 machine.go:88] provisioning docker machine ...
	I0130 22:14:29.474333  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:29.474552  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474741  680506 buildroot.go:166] provisioning hostname "old-k8s-version-912992"
	I0130 22:14:29.474767  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474946  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.477297  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477636  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.477677  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477927  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.478188  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478383  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478541  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.478761  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.479265  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.479291  680506 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-912992 && echo "old-k8s-version-912992" | sudo tee /etc/hostname
	I0130 22:14:29.626924  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-912992
	
	I0130 22:14:29.626957  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.630607  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631062  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.631094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631278  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.631514  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631696  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631891  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.632111  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.632505  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.632524  680506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-912992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-912992/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-912992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:29.777390  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:29.777424  680506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:29.777450  680506 buildroot.go:174] setting up certificates
	I0130 22:14:29.777484  680506 provision.go:83] configureAuth start
	I0130 22:14:29.777504  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.777846  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.781195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781632  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.781682  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781860  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.784395  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784744  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.784776  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784895  680506 provision.go:138] copyHostCerts
	I0130 22:14:29.784960  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:29.784973  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:29.785039  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:29.785139  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:29.785148  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:29.785173  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:29.785231  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:29.785240  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:29.785263  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:29.785404  680506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-912992 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube old-k8s-version-912992]
	I0130 22:14:30.047520  680506 provision.go:172] copyRemoteCerts
	I0130 22:14:30.047582  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:30.047607  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.050409  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050757  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.050790  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050992  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.051204  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.051345  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.051517  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.143197  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:30.164424  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 22:14:30.185497  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:30.207694  680506 provision.go:86] duration metric: configureAuth took 430.192351ms
	I0130 22:14:30.207731  680506 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:30.207938  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:14:30.208031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.210616  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.210984  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.211029  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.211184  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.211404  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211560  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211689  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.211838  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.212146  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.212161  680506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:30.548338  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:30.548369  680506 machine.go:91] provisioned docker machine in 1.074040133s
	I0130 22:14:30.548384  680506 start.go:300] post-start starting for "old-k8s-version-912992" (driver="kvm2")
	I0130 22:14:30.548397  680506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:30.548418  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.548802  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:30.548859  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.552482  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.552909  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.552945  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.553163  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.553368  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.553563  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.553702  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.649611  680506 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:30.654369  680506 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:30.654398  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:30.654527  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:30.654606  680506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:30.654692  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:30.664288  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:30.687603  680506 start.go:303] post-start completed in 139.202965ms
	I0130 22:14:30.687635  680506 fix.go:56] fixHost completed within 20.143642101s
	I0130 22:14:30.687663  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.690292  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690742  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.690780  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690973  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.691179  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691381  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691544  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.691751  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.692061  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.692072  680506 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:14:30.827201  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652870.759760061
	
	I0130 22:14:30.827227  680506 fix.go:206] guest clock: 1706652870.759760061
	I0130 22:14:30.827237  680506 fix.go:219] Guest: 2024-01-30 22:14:30.759760061 +0000 UTC Remote: 2024-01-30 22:14:30.687640253 +0000 UTC m=+368.205420110 (delta=72.119808ms)
	I0130 22:14:30.827264  680506 fix.go:190] guest clock delta is within tolerance: 72.119808ms
	I0130 22:14:30.827276  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 20.283317012s
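The guest-clock check above runs `date` with a seconds.nanoseconds format on the VM, parses the result, and compares it to the host-side timestamp to confirm the skew is within tolerance. A small Go sketch of that comparison, using the exact values from the log (the tolerance constant here is an assumption, not minikube's real threshold):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns guest output such as "1706652870.759760061"
// into a time.Time.
func parseGuestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1706652870.759760061")
	if err != nil {
		panic(err)
	}
	// Host-side reference taken from the "Remote:" timestamp in the log.
	host := time.Date(2024, 1, 30, 22, 14, 30, 687640253, time.UTC)
	delta := guest.Sub(host) // 72.119808ms for the values above
	const tolerance = time.Second // hypothetical tolerance for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta > -tolerance && delta < tolerance)
}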
	I0130 22:14:30.827301  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.827604  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:30.830260  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830761  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.830797  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830974  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831570  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831747  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831856  680506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:30.831925  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.832004  680506 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:30.832031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.834970  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835316  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835340  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835377  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835539  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.835794  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835798  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.835816  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835964  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.836028  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836202  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.836228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.836375  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836573  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.931876  680506 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:30.959543  680506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:31.114259  680506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:31.122360  680506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:31.122498  680506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:31.142608  680506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:31.142637  680506 start.go:475] detecting cgroup driver to use...
	I0130 22:14:31.142709  680506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:31.159940  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:31.177310  680506 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:31.177394  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:31.197811  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:31.215942  680506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:31.341800  680506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:31.476217  680506 docker.go:233] disabling docker service ...
	I0130 22:14:31.476303  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:31.493525  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:31.505631  680506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:31.630766  680506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:31.744997  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:31.760432  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:31.778076  680506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 22:14:31.778156  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.788945  680506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:31.789063  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.799691  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.811057  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.822879  680506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:31.835071  680506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:31.844391  680506 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:31.844478  680506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:31.858948  680506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:31.868566  680506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:31.972874  680506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:14:32.150449  680506 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:32.150536  680506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:32.155130  680506 start.go:543] Will wait 60s for crictl version
	I0130 22:14:32.155192  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:32.158927  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:32.199472  680506 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:32.199568  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.245662  680506 ssh_runner.go:195] Run: crio --version
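The CRI-O preparation above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed to pin the pause image and switch the cgroup manager to cgroupfs before restarting crio. The sketch below reproduces those two substitutions on an in-memory string with Go regexps, purely to show the shape of the edit; the real run applies sed over SSH inside the VM.

package main

import (
	"fmt"
	"regexp"
)

// applyCrioOverrides mirrors the two sed commands above: force the pause
// image and the cgroupfs cgroup manager in a CRI-O drop-in config.
func applyCrioOverrides(conf string) string {
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	return conf
}

func main() {
	// Sample drop-in content; values are illustrative only.
	sample := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"
[crio.runtime]
cgroup_manager = "systemd"
`
	fmt.Print(applyCrioOverrides(sample))
}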
	I0130 22:14:32.308945  680506 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 22:14:32.310311  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:32.313118  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313548  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:32.313596  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313777  680506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:32.317774  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:32.333291  680506 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 22:14:32.333356  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:32.389401  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:32.389494  680506 ssh_runner.go:195] Run: which lz4
	I0130 22:14:32.394618  680506 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:14:32.399870  680506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:32.399907  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
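The preload handling above first stats /preloaded.tar.lz4 on the VM; because the file is missing, the cached tarball is copied across before being extracted into /var. A minimal local Go sketch of that check-then-copy shape (the real transfer is an scp over the SSH runner, and the paths below are illustrative):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensurePreload copies the cached preload tarball to dst only when dst
// does not already exist, mirroring the stat failure followed by the scp
// in the log above.
func ensurePreload(cache, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to copy
	} else if !os.IsNotExist(err) {
		return err
	}
	in, err := os.Open(cache)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	err := ensurePreload(
		"/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4",
		"/tmp/preloaded.tar.lz4")
	fmt.Println(err)
}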
	I0130 22:14:29.354779  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.576966  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.649608  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.729908  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:29.730008  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.230637  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.730130  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.231149  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.730722  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.230159  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.258815  681007 api_server.go:72] duration metric: took 2.528908545s to wait for apiserver process to appear ...
	I0130 22:14:32.258850  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:32.258872  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
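Once the apiserver process exists, the harness switches from pgrep to probing https://192.168.50.254:8444/healthz, retrying while the endpoint returns 403 (anonymous access, as in the "system:anonymous" response below) or 500 (poststarthooks still starting). The sketch below is an unauthenticated, certificate-skipping probe written for illustration only; minikube's real client authenticates with the cluster's client certificate.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz performs one probe of the apiserver /healthz endpoint and
// returns the status code and body, as seen in the 403/500 dumps below.
func checkHealthz(url string) (int, string, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: skip verification of the apiserver's serving cert.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return 0, "", err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode, string(body), nil
}

func main() {
	for i := 0; i < 10; i++ {
		code, body, err := checkHealthz("https://192.168.50.254:8444/healthz")
		if err == nil && code == http.StatusOK && body == "ok" {
			fmt.Println("apiserver healthy")
			return
		}
		fmt.Printf("attempt %d: code=%d err=%v\n", i, code, err)
		time.Sleep(500 * time.Millisecond)
	}
}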
	I0130 22:14:31.245860  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:33.256817  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:32.753558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.761674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.208834  680506 crio.go:444] Took 1.814253 seconds to copy over tarball
	I0130 22:14:34.208929  680506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:37.177389  680506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.968423546s)
	I0130 22:14:37.177436  680506 crio.go:451] Took 2.968549 seconds to extract the tarball
	I0130 22:14:37.177450  680506 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:14:37.233540  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:37.291641  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:37.291680  680506 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:14:37.291780  680506 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.291799  680506 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.291820  680506 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 22:14:37.291828  680506 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.291904  680506 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.291802  680506 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.292022  680506 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.291788  680506 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293663  680506 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.293740  680506 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293753  680506 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.293662  680506 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.293800  680506 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.293884  680506 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.492113  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.494903  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.495618  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 22:14:37.508190  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.512582  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.514112  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
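The image-loading step above checks, image by image, whether the runtime already has each required image by running `podman image inspect --format {{.Id}}`; the earlier "daemon lookup ... No such image" lines are the host-side Docker lookups failing first. A small Go sketch of that presence check, assuming podman is run locally rather than inside the VM over SSH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imagePresent reports whether podman already has the image, using the
// same inspect invocation as the lines above.
func imagePresent(image string) bool {
	out, err := exec.Command("sudo", "podman", "image", "inspect",
		"--format", "{{.Id}}", image).Output()
	return err == nil && strings.TrimSpace(string(out)) != ""
}

func main() {
	images := []string{
		"registry.k8s.io/kube-apiserver:v1.16.0",
		"registry.k8s.io/kube-proxy:v1.16.0",
		"registry.k8s.io/pause:3.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	var missing []string
	for _, img := range images {
		if !imagePresent(img) {
			missing = append(missing, img)
		}
	}
	fmt.Println("missing images:", missing)
}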
	I0130 22:14:37.259261  681007 api_server.go:269] stopped: https://192.168.50.254:8444/healthz: Get "https://192.168.50.254:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:37.259326  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:37.454899  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:37.454935  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:37.759230  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.420961  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.420997  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.421026  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.429934  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.429972  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.759948  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:35.746244  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.748221  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.252371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.752965  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:40.032924  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.032973  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.032996  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.076077  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.076109  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.259372  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.268746  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.268785  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.759307  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.764886  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:14:40.774834  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:40.774863  681007 api_server.go:131] duration metric: took 8.516004362s to wait for apiserver health ...
	I0130 22:14:40.774875  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:40.774883  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:40.776748  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:37.573794  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.589122  680506 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 22:14:37.589177  680506 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.589222  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.653263  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.661867  680506 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 22:14:37.661918  680506 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.661974  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.681759  680506 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 22:14:37.681810  680506 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 22:14:37.681868  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811285  680506 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 22:14:37.811334  680506 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.811398  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811403  680506 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 22:14:37.811441  680506 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.811507  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811522  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.811592  680506 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 22:14:37.811646  680506 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.811684  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 22:14:37.811508  680506 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 22:14:37.811723  680506 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.811694  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811753  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811648  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.828948  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.887304  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 22:14:37.887396  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.924180  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.934685  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 22:14:37.934737  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.934948  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 22:14:37.951228  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 22:14:37.955310  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 22:14:37.988234  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 22:14:38.007649  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 22:14:38.007710  680506 cache_images.go:92] LoadImages completed in 716.017973ms
	W0130 22:14:38.007789  680506 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0130 22:14:38.007920  680506 ssh_runner.go:195] Run: crio config
	I0130 22:14:38.081077  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:38.081112  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:38.081141  680506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:38.081175  680506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-912992 NodeName:old-k8s-version-912992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 22:14:38.082099  680506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-912992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-912992
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.84:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:38.082244  680506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-912992 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:14:38.082342  680506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 22:14:38.091606  680506 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:38.091676  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:38.100424  680506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 22:14:38.117658  680506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:38.134721  680506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 22:14:38.151680  680506 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:38.155416  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:38.169111  680506 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992 for IP: 192.168.39.84
	I0130 22:14:38.169145  680506 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:38.169305  680506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:38.169342  680506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:38.169412  680506 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.key
	I0130 22:14:38.169506  680506 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key.2e1821a6
	I0130 22:14:38.169547  680506 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key
	I0130 22:14:38.169654  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:38.169689  680506 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:38.169702  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:38.169726  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:38.169753  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:38.169776  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:38.169818  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:38.170542  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:38.195046  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:38.217051  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:38.240099  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 22:14:38.266523  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:38.289237  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:38.313011  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:38.336140  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:38.359683  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:38.382658  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:38.407558  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:38.435231  680506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:38.453753  680506 ssh_runner.go:195] Run: openssl version
	I0130 22:14:38.459339  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:38.469159  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474001  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474079  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.479508  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:38.489049  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:38.498644  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503289  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503340  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.508873  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:38.518533  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:38.527871  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532447  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532493  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.538832  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:38.549398  680506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:38.553860  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:38.559537  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:38.565050  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:38.570705  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:38.576386  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:38.581918  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 22:14:38.587630  680506 kubeadm.go:404] StartCluster: {Name:old-k8s-version-912992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:38.587746  680506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:38.587803  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:38.630328  680506 cri.go:89] found id: ""
	I0130 22:14:38.630420  680506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:38.642993  680506 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:38.643026  680506 kubeadm.go:636] restartCluster start
	I0130 22:14:38.643095  680506 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:38.653192  680506 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:38.654325  680506 kubeconfig.go:92] found "old-k8s-version-912992" server: "https://192.168.39.84:8443"
	I0130 22:14:38.656891  680506 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:38.666689  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:38.666762  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:38.678857  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.167457  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.167543  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.179779  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.667279  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.667371  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.679872  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.167509  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.167607  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.181001  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.666977  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.667063  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.679278  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.167767  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.167850  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.182139  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.667595  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.667687  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.681165  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:42.167790  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.167888  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.180444  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.777979  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:40.798593  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:40.826400  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:40.839821  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:40.839847  681007 system_pods.go:61] "coredns-5dd5756b68-t65nr" [1379e1d2-263a-4d35-a630-4e197767b62d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:40.839856  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [e8468358-fd44-4f0e-b54b-13e9a478e259] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:40.839868  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [2e35ea0f-78e5-41b4-965a-c428408f84eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:40.839877  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [669d8c85-812f-4bfc-b3bb-7f5041ca8514] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:40.839890  681007 system_pods.go:61] "kube-proxy-9v5rw" [e97b697b-472b-4b3d-886b-39786c1b3760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:40.839905  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [956ec644-071b-4390-b63e-8cbe9ad2a350] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:40.839918  681007 system_pods.go:61] "metrics-server-57f55c9bc5-wlzw4" [3d2bfab3-e9e2-484b-8b8d-779869cbcf9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:40.839927  681007 system_pods.go:61] "storage-provisioner" [e87ce7ad-4933-41b6-8e20-91a4e9ecc45c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:40.839934  681007 system_pods.go:74] duration metric: took 13.512695ms to wait for pod list to return data ...
	I0130 22:14:40.839942  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:40.843711  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:40.843736  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:40.843747  681007 node_conditions.go:105] duration metric: took 3.799992ms to run NodePressure ...
	I0130 22:14:40.843762  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:41.200590  681007 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205872  681007 kubeadm.go:787] kubelet initialised
	I0130 22:14:41.205892  681007 kubeadm.go:788] duration metric: took 5.278409ms waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205899  681007 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:41.214192  681007 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:43.221105  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.787175  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.243973  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.244009  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.250982  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.751725  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.667181  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.667264  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.679726  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.167750  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.167867  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.179954  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.667584  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.667715  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.680828  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.167107  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.167263  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.183107  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.667674  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.667749  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.680942  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.167589  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.167689  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.180786  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.667715  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.667811  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.681199  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.167671  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.167764  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.181276  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.666810  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.666952  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.680935  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:47.167612  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.167711  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.180385  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.221153  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.221375  681007 pod_ready.go:92] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:47.221398  681007 pod_ready.go:81] duration metric: took 6.00718187s waiting for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:47.221411  681007 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:46.244096  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:48.245476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:46.755543  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:49.252337  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.667527  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.667633  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.680519  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.167564  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.167659  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.179815  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.667656  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.667733  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.682679  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.682711  680506 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:48.682722  680506 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:48.682735  680506 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:48.682788  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:48.726311  680506 cri.go:89] found id: ""
	I0130 22:14:48.726399  680506 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:48.744504  680506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:48.755471  680506 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:48.755523  680506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765613  680506 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765636  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:48.886214  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:49.873929  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.090456  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.199471  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.278504  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:50.278604  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:50.779646  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.279488  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.779657  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.829813  680506 api_server.go:72] duration metric: took 1.551314483s to wait for apiserver process to appear ...
	I0130 22:14:51.829852  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:51.829888  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:51.830469  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": dial tcp 192.168.39.84:8443: connect: connection refused
	I0130 22:14:52.330162  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:49.228581  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.230115  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.228169  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.228193  681007 pod_ready.go:81] duration metric: took 6.006776273s waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.228201  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233723  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.233746  681007 pod_ready.go:81] duration metric: took 5.53858ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233754  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238962  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.238983  681007 pod_ready.go:81] duration metric: took 5.221325ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238994  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247623  681007 pod_ready.go:92] pod "kube-proxy-9v5rw" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.247646  681007 pod_ready.go:81] duration metric: took 8.643709ms waiting for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247657  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254079  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.254102  681007 pod_ready.go:81] duration metric: took 6.435694ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254113  681007 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:50.745213  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.245163  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.252956  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.750853  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.331302  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:57.331361  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:55.262286  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.762588  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:55.245641  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.246341  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:58.248157  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.248193  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.248223  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.329248  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.329276  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.330342  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.349249  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.349288  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:58.830998  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.836484  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.836510  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.330646  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.337516  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:59.337559  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.830016  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.836129  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:14:59.846684  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:14:59.846741  680506 api_server.go:131] duration metric: took 8.016878739s to wait for apiserver health ...
	I0130 22:14:59.846760  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:59.846770  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:59.848874  680506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
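The healthz exchange above follows the usual pattern for a restarted apiserver: 403 while anonymous access to /healthz is still forbidden (the log shows the system:public-info-viewer clusterrole has not been recreated yet), 500 while the rbac/bootstrap-roles and related poststarthooks are still completing, then 200. Below is a minimal sketch of that kind of poll loop, assuming a self-signed serving certificate; it is not minikube's api_server.go implementation, only an illustration of the pattern visible in the log.

    // waitForHealthz keeps GETting /healthz until it returns 200 "ok" or the
    // deadline passes, printing the non-200 bodies much like the log above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Illustrative only: the test cluster serves a self-signed cert.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil // "ok"
                }
                // 403 before RBAC bootstrap, 500 while poststarthooks still fail.
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver not healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.84:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }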
	I0130 22:14:55.751242  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.755048  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:00.251809  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
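The pod_ready.go:102 lines from processes 680821, 681007 and 680786, which repeat through the rest of this log, are the other StartStop profiles polling their metrics-server pod and finding its Ready condition still "False". A small client-go sketch of the same check follows; it assumes the default kubeconfig location and the conventional k8s-app=metrics-server label, and is not minikube's pod_ready.go code.

    // podready lists the metrics-server pod(s) and reports whether their
    // PodReady condition is True -- the check the log keeps retrying.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=metrics-server"})
        if err != nil {
            panic(err)
        }
        for _, pod := range pods.Items {
            ready := false
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
        }
    }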
	I0130 22:14:59.850215  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:59.860069  680506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:59.880017  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:59.891300  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:14:59.891330  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:14:59.891335  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:14:59.891340  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:14:59.891345  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Pending
	I0130 22:14:59.891349  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:14:59.891352  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:14:59.891360  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:14:59.891368  680506 system_pods.go:74] duration metric: took 11.331282ms to wait for pod list to return data ...
	I0130 22:14:59.891377  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:59.895522  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:59.895558  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:59.895571  680506 node_conditions.go:105] duration metric: took 4.184167ms to run NodePressure ...
	I0130 22:14:59.895591  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:15:00.214560  680506 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218844  680506 kubeadm.go:787] kubelet initialised
	I0130 22:15:00.218863  680506 kubeadm.go:788] duration metric: took 4.278574ms waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218870  680506 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:00.223310  680506 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.228349  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228371  680506 pod_ready.go:81] duration metric: took 5.033709ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.228380  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228385  680506 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.236353  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236378  680506 pod_ready.go:81] duration metric: took 7.981988ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.236387  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236394  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.244477  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244504  680506 pod_ready.go:81] duration metric: took 8.099653ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.244521  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244531  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.283561  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283590  680506 pod_ready.go:81] duration metric: took 39.047028ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.283602  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283610  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.683495  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683524  680506 pod_ready.go:81] duration metric: took 399.906973ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.683537  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683544  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:01.084061  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084093  680506 pod_ready.go:81] duration metric: took 400.538074ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:01.084107  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084117  680506 pod_ready.go:38] duration metric: took 865.238684ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:01.084149  680506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:15:01.120344  680506 ops.go:34] apiserver oom_adj: -16
	I0130 22:15:01.120372  680506 kubeadm.go:640] restartCluster took 22.477337631s
	I0130 22:15:01.120384  680506 kubeadm.go:406] StartCluster complete in 22.532762257s
	I0130 22:15:01.120408  680506 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.120536  680506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:15:01.123018  680506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.123321  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:15:01.123514  680506 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:15:01.123624  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:15:01.123662  680506 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123683  680506 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123701  680506 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-912992"
	W0130 22:15:01.123709  680506 addons.go:243] addon metrics-server should already be in state true
	I0130 22:15:01.123745  680506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-912992"
	I0130 22:15:01.123769  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124153  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124178  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.124189  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124218  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.123635  680506 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-912992"
	I0130 22:15:01.124295  680506 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-912992"
	W0130 22:15:01.124303  680506 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:15:01.124357  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124693  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124741  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.141006  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0130 22:15:01.141022  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0130 22:15:01.141594  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.141697  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.142122  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142142  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142273  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142297  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142793  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.142837  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0130 22:15:01.142797  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.143291  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.143380  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.143411  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.143758  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.143786  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.144174  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.144210  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.144212  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.144438  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.148328  680506 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-912992"
	W0130 22:15:01.148350  680506 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:15:01.148378  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.148706  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.148734  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.163324  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0130 22:15:01.163720  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0130 22:15:01.164054  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164187  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164638  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164665  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.164806  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164817  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.165086  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165242  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165310  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.165844  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.167686  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.170253  680506 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:15:01.168142  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.169379  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0130 22:15:01.172172  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:15:01.172200  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:15:01.172228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.174608  680506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:15:01.173335  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.175891  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.176824  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.177101  680506 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.177110  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.177116  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:15:01.177134  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.177137  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.177239  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.177855  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.178037  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.181184  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181626  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.181644  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181879  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.182032  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.182215  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.182321  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.182343  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.182745  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.182805  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.183262  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.183296  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.218510  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0130 22:15:01.218955  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.219566  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.219598  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.219976  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.220136  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.221882  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.222143  680506 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.222161  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:15:01.222178  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.225129  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225437  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.225454  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225732  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.225875  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.225948  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.226015  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.362950  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.405756  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:15:01.405829  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:15:01.442804  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.468468  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:15:01.468501  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:15:01.514493  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.514530  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:15:01.531543  680506 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 22:15:01.551886  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.697743  680506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-912992" context rescaled to 1 replicas
	I0130 22:15:01.697805  680506 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:15:01.699954  680506 out.go:177] * Verifying Kubernetes components...
	I0130 22:15:01.701746  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078654  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078682  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078704  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078736  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078751  680506 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:02.079190  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079200  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079221  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079229  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079231  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079235  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079245  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079246  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079200  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079257  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079266  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079665  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079685  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079695  680506 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-912992"
	I0130 22:15:02.079699  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079719  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.081702  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081725  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.081736  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.081746  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.081969  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081999  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.087366  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.087387  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.087642  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.087661  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.089698  680506 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 22:15:02.091156  680506 addons.go:505] enable addons completed in 967.651598ms: enabled=[metrics-server storage-provisioner default-storageclass]
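The addon enablement above amounts to manifest delivery plus kubectl apply: each addon YAML is scp'd into /etc/kubernetes/addons and applied with the pinned v1.16.0 kubectl against the in-VM kubeconfig. Below is a rough local analogue of that apply step; minikube actually runs it over SSH via ssh_runner, so the kubectl binary and the way it is invoked here are illustrative, while the file paths are the ones visible in the log.

    // applyAddons mirrors the apply step shown above: apply the addon
    // manifests in one kubectl invocation against the given kubeconfig.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func applyAddons(kubeconfig string, manifests []string) error {
        args := []string{"--kubeconfig", kubeconfig, "apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        out, err := exec.Command("kubectl", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
        }
        fmt.Printf("%s", out)
        return nil
    }

    func main() {
        _ = applyAddons("/var/lib/minikube/kubeconfig", []string{
            "/etc/kubernetes/addons/metrics-apiservice.yaml",
            "/etc/kubernetes/addons/metrics-server-deployment.yaml",
            "/etc/kubernetes/addons/metrics-server-rbac.yaml",
            "/etc/kubernetes/addons/metrics-server-service.yaml",
        })
    }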
	I0130 22:14:59.767179  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.262656  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.743796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:01.745268  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.245639  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.754252  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:05.250850  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.082265  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:06.582230  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:04.764379  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.764868  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.765839  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.744476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.744978  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.584004  680506 node_ready.go:49] node "old-k8s-version-912992" has status "Ready":"True"
	I0130 22:15:08.584038  680506 node_ready.go:38] duration metric: took 6.50526711s waiting for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:08.584052  680506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:08.591084  680506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595709  680506 pod_ready.go:92] pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.595735  680506 pod_ready.go:81] duration metric: took 4.623355ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595747  680506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600152  680506 pod_ready.go:92] pod "etcd-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.600175  680506 pod_ready.go:81] duration metric: took 4.419847ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600186  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604426  680506 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.604444  680506 pod_ready.go:81] duration metric: took 4.249901ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604454  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608671  680506 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.608685  680506 pod_ready.go:81] duration metric: took 4.224838ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608694  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984275  680506 pod_ready.go:92] pod "kube-proxy-qm7xx" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.984306  680506 pod_ready.go:81] duration metric: took 375.604271ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984321  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384278  680506 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:09.384303  680506 pod_ready.go:81] duration metric: took 399.974439ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384316  680506 pod_ready.go:38] duration metric: took 800.249209ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:09.384331  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:15:09.384383  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:15:09.399639  680506 api_server.go:72] duration metric: took 7.701783762s to wait for apiserver process to appear ...
	I0130 22:15:09.399665  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:15:09.399683  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:15:09.406824  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:15:09.407829  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:15:09.407850  680506 api_server.go:131] duration metric: took 8.177146ms to wait for apiserver health ...
	I0130 22:15:09.407860  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:15:09.584994  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:15:09.585031  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.585039  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.585046  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.585053  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.585059  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.585065  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.585072  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.585080  680506 system_pods.go:74] duration metric: took 177.213093ms to wait for pod list to return data ...
	I0130 22:15:09.585092  680506 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:15:09.784286  680506 default_sa.go:45] found service account: "default"
	I0130 22:15:09.784313  680506 default_sa.go:55] duration metric: took 199.211541ms for default service account to be created ...
	I0130 22:15:09.784322  680506 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:15:09.987063  680506 system_pods.go:86] 7 kube-system pods found
	I0130 22:15:09.987094  680506 system_pods.go:89] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.987103  680506 system_pods.go:89] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.987109  680506 system_pods.go:89] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.987114  680506 system_pods.go:89] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.987120  680506 system_pods.go:89] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.987125  680506 system_pods.go:89] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.987131  680506 system_pods.go:89] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.987140  680506 system_pods.go:126] duration metric: took 202.811673ms to wait for k8s-apps to be running ...
	I0130 22:15:09.987150  680506 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:15:09.987206  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:10.001966  680506 system_svc.go:56] duration metric: took 14.805505ms WaitForService to wait for kubelet.
	I0130 22:15:10.001997  680506 kubeadm.go:581] duration metric: took 8.30415043s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:15:10.002022  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:15:10.184699  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:15:10.184743  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:15:10.184756  680506 node_conditions.go:105] duration metric: took 182.728475ms to run NodePressure ...
	I0130 22:15:10.184772  680506 start.go:228] waiting for startup goroutines ...
	I0130 22:15:10.184782  680506 start.go:233] waiting for cluster config update ...
	I0130 22:15:10.184796  680506 start.go:242] writing updated cluster config ...
	I0130 22:15:10.185114  680506 ssh_runner.go:195] Run: rm -f paused
	I0130 22:15:10.239744  680506 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 22:15:10.241916  680506 out.go:177] 
	W0130 22:15:10.243307  680506 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 22:15:10.244540  680506 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 22:15:10.245844  680506 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-912992" cluster and "default" namespace by default
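The closing warning is about client/server version skew: kubectl is only supported within one minor version of the apiserver, and 1.29.1 against 1.16.0 is thirteen minors apart, so minikube points at its bundled wrapper, which fetches a kubectl matching the cluster version. A minimal sketch of invoking the hinted command from Go; the command itself is exactly the one printed above.

    // Runs the command suggested in the log; `minikube kubectl` downloads and
    // invokes a kubectl that matches the cluster's Kubernetes version (v1.16.0).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("minikube", "kubectl", "--", "get", "pods", "-A").CombinedOutput()
        if err != nil {
            fmt.Println("minikube kubectl failed:", err)
        }
        fmt.Printf("%s", out)
    }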
	I0130 22:15:07.753442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.250385  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.770107  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.262302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:11.244598  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.744540  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:12.252794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:14.750293  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:15.761573  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:17.764138  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.245719  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.744763  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.751093  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.751144  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:19.766344  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:22.262506  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.243857  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.244633  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.250405  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.752715  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:24.762412  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.260985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:25.744105  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.746611  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:26.250066  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:28.250115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.251911  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:29.262020  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:31.763782  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.243836  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.244064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.244535  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.754073  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:35.249927  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.260099  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.262332  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.262515  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.245173  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.747970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:37.252466  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:39.254833  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:40.264075  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:42.763978  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.244902  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.246545  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.750938  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.751361  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.262599  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.769508  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.743965  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.745769  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:46.250381  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:48.250841  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.262796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.763728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:49.746064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:51.750634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.244634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.750564  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.751105  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.751544  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:55.261060  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:57.262293  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.245111  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:58.246787  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.751681  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.250409  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.762572  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.765901  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:00.744216  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:02.744765  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.750473  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.252199  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.267246  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.764985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:05.252271  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:07.745483  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.252327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:08.750460  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:09.263071  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.764448  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:10.244124  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:12.245643  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.248183  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.254631  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:13.752086  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.262534  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.763532  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.744988  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.746562  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.251554  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.751130  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:19.261302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.262097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.764162  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.243403  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.245825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:20.751443  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.251248  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:26.261011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.263281  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.744554  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:27.744970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.750244  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.249555  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.250246  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.761252  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.762070  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:29.745453  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.243772  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.245396  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.251218  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.752524  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:35.261942  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.264695  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:36.745702  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.244617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.250645  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.251192  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.762454  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.765643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.244956  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.245892  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.750084  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.751479  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:44.262004  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.262160  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.763669  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:45.744222  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:47.745591  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.249746  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.250654  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.252500  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:51.261603  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:53.261672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.244099  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.744215  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.749766  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.750634  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:55.261803  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:57.262915  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.744549  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.745030  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.244809  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.751851  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.258417  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.268254  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.761347  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.761999  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.246996  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.744672  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.750976  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.751083  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:05.763147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.264472  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.244449  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.244796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.250266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.250718  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.761567  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.762159  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.245064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.744572  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.750221  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.750688  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.752051  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:15.261414  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.262083  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.745621  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.243837  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.244825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.250798  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.251873  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.262614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.761873  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.762158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.245432  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.745684  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.750760  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:24.252401  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:25.762960  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.261732  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.246290  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.744375  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.749794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.750363  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:30.262011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:32.762896  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.243646  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.245351  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.251364  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.750995  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.262828  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.763644  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.245530  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.246211  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.752489  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.251704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.261365  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.261786  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:39.745084  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:41.746617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.244143  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.750921  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:45.251115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.262664  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.764196  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.769165  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.744967  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.745930  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:47.751743  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:50.250561  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.261754  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.764405  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.244859  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.744487  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:52.254402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:54.751442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:56.260885  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.261304  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:55.747588  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.244383  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:57.250767  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:59.750343  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.262535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.762755  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.248648  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.744883  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:01.751253  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:03.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:04.763841  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.263079  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:05.244262  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.244758  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.245079  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:06.252399  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:08.750732  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.263723  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.766305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.771997  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.744688  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:14.243700  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:10.751691  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.254909  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.263146  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.764654  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.244291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.250725  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:15.751459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:17.752591  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.251354  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:21.263171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.762025  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.238489  680786 pod_ready.go:81] duration metric: took 4m0.001085938s waiting for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:20.238561  680786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:20.238585  680786 pod_ready.go:38] duration metric: took 4m13.374837351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:20.238635  680786 kubeadm.go:640] restartCluster took 4m32.952408079s
	W0130 22:18:20.238771  680786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:20.238897  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:22.752701  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.743814  680821 pod_ready.go:81] duration metric: took 4m0.000772856s waiting for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:23.743843  680821 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:23.743867  680821 pod_ready.go:38] duration metric: took 4m8.55197109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:23.743901  680821 kubeadm.go:640] restartCluster took 4m27.679173945s
	W0130 22:18:23.743979  680821 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:23.744016  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
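	[The repeated pod_ready.go:102 entries above come from a poll-until-Ready loop that checks the pod's Ready condition every few seconds until a 4m0s deadline expires (pod_ready.go:81/66). The following is only a minimal sketch of that general pattern, assuming client-go and an already-constructed *kubernetes.Clientset; it is illustrative and is not minikube's actual implementation.]

	// podwait_sketch.go - illustrative sketch only; assumes client-go is vendored.
	package podwait

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls the named pod until its Ready condition is True or the
	// timeout (e.g. 4m0s, as in the log above) expires.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollImmediate(2*time.Second, timeout, func() (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					if c.Status != corev1.ConditionTrue {
						fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
						return false, nil
					}
					return true, nil
				}
			}
			return false, nil
		})
	}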
	I0130 22:18:25.762818  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:27.766206  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:30.262706  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:32.263895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:33.696118  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.457184259s)
	I0130 22:18:33.696246  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:33.709756  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:33.719095  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:33.727249  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:33.727304  680786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:33.783803  680786 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0130 22:18:33.783934  680786 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:33.947330  680786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:33.947473  680786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:33.947594  680786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:34.185129  680786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:34.186847  680786 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:34.186958  680786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:34.187047  680786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:34.187130  680786 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:34.187254  680786 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:34.187590  680786 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:34.188233  680786 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:34.188591  680786 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:34.189435  680786 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:34.189737  680786 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:34.190284  680786 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:34.190677  680786 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:34.190788  680786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:34.357057  680786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:34.468135  680786 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0130 22:18:34.785137  680786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:34.900902  680786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:34.973785  680786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:34.974693  680786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:34.977481  680786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:37.518038  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.773993992s)
	I0130 22:18:37.518130  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:37.533148  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:37.542965  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:37.552859  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:37.552915  680821 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:37.614837  680821 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:18:37.614964  680821 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:37.783252  680821 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:37.783431  680821 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:37.783598  680821 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:38.009789  680821 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:38.011805  680821 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:38.011921  680821 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:38.012010  680821 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:38.012140  680821 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:38.012573  680821 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:38.013135  680821 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:38.014103  680821 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:38.015459  680821 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:38.016522  680821 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:38.017879  680821 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:38.018669  680821 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:38.019318  680821 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:38.019416  680821 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:38.190496  680821 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:38.487122  680821 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:38.567485  680821 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:38.764572  680821 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:38.765081  680821 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:38.771540  680821 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:34.761686  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:36.763512  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:38.772838  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:34.979275  680786 out.go:204]   - Booting up control plane ...
	I0130 22:18:34.979394  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:34.979502  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:34.979687  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:35.000161  680786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:35.001100  680786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:35.001180  680786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:35.143762  680786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:38.773177  680821 out.go:204]   - Booting up control plane ...
	I0130 22:18:38.773326  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:38.773447  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:38.774160  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:38.793263  680821 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:38.793414  680821 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:38.793489  680821 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:38.942605  680821 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:41.263027  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.264305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.147099  680786 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003222 seconds
	I0130 22:18:43.165914  680786 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:43.183810  680786 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:43.729066  680786 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:43.729309  680786 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-023824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:44.247224  680786 kubeadm.go:322] [bootstrap-token] Using token: 8v59zo.bsn08ubvfg01lew3
	I0130 22:18:44.248930  680786 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:44.249075  680786 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:44.256127  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:44.265628  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:44.269906  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:44.278100  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:44.283097  680786 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:44.301902  680786 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:44.542713  680786 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:44.665337  680786 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:44.665367  680786 kubeadm.go:322] 
	I0130 22:18:44.665448  680786 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:44.665463  680786 kubeadm.go:322] 
	I0130 22:18:44.665573  680786 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:44.665583  680786 kubeadm.go:322] 
	I0130 22:18:44.665660  680786 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:44.665761  680786 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:44.665830  680786 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:44.665840  680786 kubeadm.go:322] 
	I0130 22:18:44.665909  680786 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:44.665927  680786 kubeadm.go:322] 
	I0130 22:18:44.665994  680786 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:44.666003  680786 kubeadm.go:322] 
	I0130 22:18:44.666084  680786 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:44.666220  680786 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:44.666324  680786 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:44.666349  680786 kubeadm.go:322] 
	I0130 22:18:44.666456  680786 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:44.666544  680786 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:44.666551  680786 kubeadm.go:322] 
	I0130 22:18:44.666646  680786 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.666764  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:44.666789  680786 kubeadm.go:322] 	--control-plane 
	I0130 22:18:44.666795  680786 kubeadm.go:322] 
	I0130 22:18:44.666898  680786 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:44.666906  680786 kubeadm.go:322] 
	I0130 22:18:44.667000  680786 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.667121  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:44.667741  680786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:44.667773  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:18:44.667784  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:44.669613  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:47.444081  680821 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502071 seconds
	I0130 22:18:47.444241  680821 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:47.470140  680821 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:48.014141  680821 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:48.014385  680821 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-713938 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:48.528168  680821 kubeadm.go:322] [bootstrap-token] Using token: 5j3t7l.lolt26xy60ozf3ca
	I0130 22:18:45.765205  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.261716  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.529669  680821 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:48.529807  680821 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:48.544442  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:48.552536  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:48.555846  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:48.559711  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:48.563810  680821 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:48.580095  680821 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:48.820236  680821 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:48.950911  680821 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:48.951833  680821 kubeadm.go:322] 
	I0130 22:18:48.951927  680821 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:48.951958  680821 kubeadm.go:322] 
	I0130 22:18:48.952042  680821 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:48.952063  680821 kubeadm.go:322] 
	I0130 22:18:48.952089  680821 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:48.952144  680821 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:48.952190  680821 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:48.952196  680821 kubeadm.go:322] 
	I0130 22:18:48.952267  680821 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:48.952287  680821 kubeadm.go:322] 
	I0130 22:18:48.952346  680821 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:48.952356  680821 kubeadm.go:322] 
	I0130 22:18:48.952439  680821 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:48.952554  680821 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:48.952661  680821 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:48.952671  680821 kubeadm.go:322] 
	I0130 22:18:48.952805  680821 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:48.952894  680821 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:48.952906  680821 kubeadm.go:322] 
	I0130 22:18:48.953001  680821 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953139  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:48.953177  680821 kubeadm.go:322] 	--control-plane 
	I0130 22:18:48.953189  680821 kubeadm.go:322] 
	I0130 22:18:48.953296  680821 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:48.953306  680821 kubeadm.go:322] 
	I0130 22:18:48.953413  680821 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953555  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:48.954606  680821 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:48.954659  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:18:48.954677  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:48.956379  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:44.671035  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:44.696043  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
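	[The "Configuring bridge CNI" step above writes a small conflist into /etc/cni/net.d. The exact contents of minikube's 1-k8s.conflist are not reproduced in this log, so the sketch below only shows what writing a representative bridge-plugin configuration could look like; the network name, subnet, and plugin list are assumptions, not minikube's actual file.]

	// cni_sketch.go - representative example only; contents are assumed, not
	// taken from the 1-k8s.conflist referenced in the log.
	package main

	import (
		"log"
		"os"
	)

	// A typical bridge CNI conflist: a bridge plugin with host-local IPAM plus
	// the portmap plugin for hostPort support. Values here are placeholders.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "k8s-pod-network",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isGateway": true,
	      "ipMasq": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": {"portMappings": true}
	    }
	  ]
	}`

	func main() {
		// Mirrors the two steps in the log: mkdir -p /etc/cni/net.d, then write the conflist.
		if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
			log.Fatal(err)
		}
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			log.Fatal(err)
		}
	}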
	I0130 22:18:44.785738  680786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:44.785867  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.785894  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=no-preload-023824 minikube.k8s.io/updated_at=2024_01_30T22_18_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.887327  680786 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:45.135926  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:45.636755  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.136406  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.636077  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.136080  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.636924  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.136830  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.636945  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.136038  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.957922  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:48.974487  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:49.035551  680821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=embed-certs-713938 minikube.k8s.io/updated_at=2024_01_30T22_18_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.085285  680821 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:49.366490  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.866648  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.366789  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.761888  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:52.765352  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:53.254549  681007 pod_ready.go:81] duration metric: took 4m0.000414494s waiting for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:53.254593  681007 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:53.254623  681007 pod_ready.go:38] duration metric: took 4m12.048715105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:53.254662  681007 kubeadm.go:640] restartCluster took 4m34.780590329s
	W0130 22:18:53.254758  681007 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:53.254793  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:49.635946  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.136681  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.636090  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.136427  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.636232  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.136032  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.636639  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.136839  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.636957  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.136140  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.866857  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.367211  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.867291  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.366659  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.867351  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.366925  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.867180  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.366846  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.866651  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.366588  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.636246  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.136047  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.636970  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.136258  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.636239  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.136269  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.262159  680786 kubeadm.go:1088] duration metric: took 12.476361074s to wait for elevateKubeSystemPrivileges.
	I0130 22:18:57.262235  680786 kubeadm.go:406] StartCluster complete in 5m10.025020914s
	I0130 22:18:57.262288  680786 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.262417  680786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:18:57.265204  680786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.265504  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:18:57.265655  680786 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:18:57.265746  680786 addons.go:69] Setting storage-provisioner=true in profile "no-preload-023824"
	I0130 22:18:57.265769  680786 addons.go:234] Setting addon storage-provisioner=true in "no-preload-023824"
	W0130 22:18:57.265784  680786 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:18:57.265774  680786 addons.go:69] Setting default-storageclass=true in profile "no-preload-023824"
	I0130 22:18:57.265812  680786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-023824"
	I0130 22:18:57.265838  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:18:57.265817  680786 addons.go:69] Setting metrics-server=true in profile "no-preload-023824"
	I0130 22:18:57.265880  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.265898  680786 addons.go:234] Setting addon metrics-server=true in "no-preload-023824"
	W0130 22:18:57.265925  680786 addons.go:243] addon metrics-server should already be in state true
	I0130 22:18:57.265973  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266315  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266349  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266376  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266416  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.286273  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0130 22:18:57.286366  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I0130 22:18:57.286463  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0130 22:18:57.287691  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287692  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287851  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.288302  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288323  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288428  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288439  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288511  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288524  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288850  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.288897  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289215  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289405  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289437  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289685  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289719  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289792  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.293877  680786 addons.go:234] Setting addon default-storageclass=true in "no-preload-023824"
	W0130 22:18:57.293899  680786 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:18:57.293928  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.294325  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.294356  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.310259  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0130 22:18:57.310765  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.311270  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.311289  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.311818  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.312317  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.313547  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0130 22:18:57.314105  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.314665  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.314686  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.314752  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.316570  680786 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:18:57.315368  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.317812  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:18:57.317835  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:18:57.317858  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.318173  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.318194  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.321603  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.321671  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0130 22:18:57.321961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.322001  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.322280  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.322296  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.322491  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	W0130 22:18:57.322819  680786 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-023824" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0130 22:18:57.322843  680786 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0130 22:18:57.322866  680786 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:18:57.324267  680786 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:57.323003  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.323084  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.325567  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.325663  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:57.325909  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.326903  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.327113  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.329169  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.331160  680786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:18:57.332481  680786 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.332500  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:18:57.332519  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.336038  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336525  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.336546  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336746  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.336901  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.337031  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.337256  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.338027  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0130 22:18:57.338387  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.339078  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.339097  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.339406  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.339628  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.341385  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.341687  680786 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.341705  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:18:57.341725  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.344745  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345159  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.345180  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345408  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.345613  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.349708  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.349906  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.525974  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.531582  680786 node_ready.go:35] waiting up to 6m0s for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.532157  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:18:57.546542  680786 node_ready.go:49] node "no-preload-023824" has status "Ready":"True"
	I0130 22:18:57.546575  680786 node_ready.go:38] duration metric: took 14.926402ms waiting for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.546592  680786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:57.573983  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:18:57.589817  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:18:57.589854  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:18:57.684894  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:18:57.684926  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:18:57.715247  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.726490  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:57.726521  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:18:57.824368  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:58.842258  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.316238822s)
	I0130 22:18:58.842310  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842327  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842341  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.310137299s)
	I0130 22:18:58.842386  680786 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0130 22:18:58.842447  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.127164198s)
	I0130 22:18:58.842474  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842486  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842830  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842870  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842893  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842898  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842900  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842921  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842924  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842931  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842937  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842948  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.843222  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843243  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.843456  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843469  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.885944  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.885978  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.886311  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.888268  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.888288  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228029  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.403587938s)
	I0130 22:18:59.228205  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228233  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.228672  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.228714  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.228738  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228749  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228762  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.229119  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.229182  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.229197  680786 addons.go:470] Verifying addon metrics-server=true in "no-preload-023824"
	I0130 22:18:59.229126  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.230815  680786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:18:59.232158  680786 addons.go:505] enable addons completed in 1.966513856s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:18:55.867390  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.367181  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.866689  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.366578  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.867406  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.366702  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.867537  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.366860  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.867263  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.366507  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.866976  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.994251  680821 kubeadm.go:1088] duration metric: took 11.958653294s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:00.994309  680821 kubeadm.go:406] StartCluster complete in 5m4.981146882s
	I0130 22:19:00.994337  680821 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.994437  680821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:00.997310  680821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.997649  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:00.997866  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:00.997819  680821 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:00.997932  680821 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-713938"
	I0130 22:19:00.997951  680821 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-713938"
	W0130 22:19:00.997962  680821 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:00.997978  680821 addons.go:69] Setting metrics-server=true in profile "embed-certs-713938"
	I0130 22:19:00.997979  680821 addons.go:69] Setting default-storageclass=true in profile "embed-certs-713938"
	I0130 22:19:00.997994  680821 addons.go:234] Setting addon metrics-server=true in "embed-certs-713938"
	W0130 22:19:00.998002  680821 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:00.998009  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998012  680821 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-713938"
	I0130 22:19:00.998035  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998425  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998450  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.018726  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0130 22:19:01.018744  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0130 22:19:01.018754  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0130 22:19:01.019224  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019255  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019329  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019860  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.019890  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020012  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020062  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.020311  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020379  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020530  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.020984  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.021001  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021030  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.021533  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021581  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.024902  680821 addons.go:234] Setting addon default-storageclass=true in "embed-certs-713938"
	W0130 22:19:01.024926  680821 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:01.024955  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:01.025333  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.025372  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.041760  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0130 22:19:01.043510  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0130 22:19:01.043937  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.043980  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.044434  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044454  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.044864  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044902  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.045102  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045331  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045686  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.045730  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.045952  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.049065  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0130 22:19:01.049076  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.051101  680821 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:01.049716  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.052918  680821 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.052937  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:01.052959  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.055109  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.055135  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.057586  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.057591  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057611  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.057625  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057656  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.057829  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.057831  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.057974  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.058123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.063470  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.065048  680821 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:01.066385  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:01.066404  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:01.066425  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.066427  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I0130 22:19:01.067271  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.067806  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.067834  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.068198  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.068403  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.069684  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070069  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.070133  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.070162  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070347  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.070369  680821 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.070381  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:01.070402  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.073308  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073914  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.073945  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073978  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074155  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074207  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.074325  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.074346  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074441  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074534  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.210631  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.237088  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.307032  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:01.307130  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:01.368366  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:01.368405  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:01.388184  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:01.443355  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.443414  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:01.558399  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.610498  680821 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-713938" context rescaled to 1 replicas
	I0130 22:19:01.610545  680821 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:01.612750  680821 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:59.584739  680786 pod_ready.go:102] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:01.089751  680786 pod_ready.go:92] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.089826  680786 pod_ready.go:81] duration metric: took 3.515759187s waiting for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.089853  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098560  680786 pod_ready.go:92] pod "coredns-76f75df574-znj8f" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.098645  680786 pod_ready.go:81] duration metric: took 8.774285ms waiting for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098671  680786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.106943  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.107036  680786 pod_ready.go:81] duration metric: took 8.345837ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.107062  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120384  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.120413  680786 pod_ready.go:81] duration metric: took 13.332445ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120427  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129739  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.129825  680786 pod_ready.go:81] duration metric: took 9.387442ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129850  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282077  680786 pod_ready.go:92] pod "kube-proxy-8rn6v" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.282110  680786 pod_ready.go:81] duration metric: took 1.152243055s waiting for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282123  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681191  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.681221  680786 pod_ready.go:81] duration metric: took 399.089453ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681232  680786 pod_ready.go:38] duration metric: took 5.134627161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:02.681249  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:19:02.681313  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:19:02.695239  680786 api_server.go:72] duration metric: took 5.372338357s to wait for apiserver process to appear ...
	I0130 22:19:02.695265  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:19:02.695291  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:19:02.700070  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:19:02.701235  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:19:02.701266  680786 api_server.go:131] duration metric: took 5.988974ms to wait for apiserver health ...
	I0130 22:19:02.701279  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:19:02.899520  680786 system_pods.go:59] 9 kube-system pods found
	I0130 22:19:02.899558  680786 system_pods.go:61] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:02.899565  680786 system_pods.go:61] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:02.899572  680786 system_pods.go:61] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:02.899579  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:02.899586  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:02.899592  680786 system_pods.go:61] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:02.899599  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:02.899610  680786 system_pods.go:61] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:02.899626  680786 system_pods.go:61] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:02.899637  680786 system_pods.go:74] duration metric: took 198.349705ms to wait for pod list to return data ...
	I0130 22:19:02.899649  680786 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:19:03.080624  680786 default_sa.go:45] found service account: "default"
	I0130 22:19:03.080668  680786 default_sa.go:55] duration metric: took 181.003649ms for default service account to be created ...
	I0130 22:19:03.080681  680786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:19:03.285004  680786 system_pods.go:86] 9 kube-system pods found
	I0130 22:19:03.285040  680786 system_pods.go:89] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:03.285048  680786 system_pods.go:89] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:03.285056  680786 system_pods.go:89] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:03.285063  680786 system_pods.go:89] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:03.285069  680786 system_pods.go:89] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:03.285073  680786 system_pods.go:89] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:03.285078  680786 system_pods.go:89] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:03.285089  680786 system_pods.go:89] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:03.285097  680786 system_pods.go:89] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:03.285107  680786 system_pods.go:126] duration metric: took 204.418927ms to wait for k8s-apps to be running ...
	I0130 22:19:03.285117  680786 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:19:03.285172  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.303077  680786 system_svc.go:56] duration metric: took 17.949308ms WaitForService to wait for kubelet.
	I0130 22:19:03.303108  680786 kubeadm.go:581] duration metric: took 5.980212644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:19:03.303133  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:19:03.481755  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:19:03.481794  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:19:03.481804  680786 node_conditions.go:105] duration metric: took 178.666283ms to run NodePressure ...
	I0130 22:19:03.481816  680786 start.go:228] waiting for startup goroutines ...
	I0130 22:19:03.481822  680786 start.go:233] waiting for cluster config update ...
	I0130 22:19:03.481860  680786 start.go:242] writing updated cluster config ...
	I0130 22:19:03.482145  680786 ssh_runner.go:195] Run: rm -f paused
	I0130 22:19:03.549733  680786 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 22:19:03.551653  680786 out.go:177] * Done! kubectl is now configured to use "no-preload-023824" cluster and "default" namespace by default
	I0130 22:19:01.614025  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.810450  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.573311695s)
	I0130 22:19:03.810519  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810531  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810592  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599920536s)
	I0130 22:19:03.810625  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.422412443s)
	I0130 22:19:03.810639  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810653  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810640  680821 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 22:19:03.811010  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811010  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811035  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811034  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811038  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811045  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811055  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811056  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811065  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811074  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811299  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811317  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811626  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811677  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811686  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838002  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.838036  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.838339  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.838364  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838384  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842042  680821 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.227988129s)
	I0130 22:19:03.842085  680821 node_ready.go:35] waiting up to 6m0s for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.842321  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.283887868s)
	I0130 22:19:03.842355  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842369  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.842728  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842753  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.842761  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.842772  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842784  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.843015  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.843031  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.843042  680821 addons.go:470] Verifying addon metrics-server=true in "embed-certs-713938"
	I0130 22:19:03.844872  680821 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:03.846361  680821 addons.go:505] enable addons completed in 2.848549166s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:03.857259  680821 node_ready.go:49] node "embed-certs-713938" has status "Ready":"True"
	I0130 22:19:03.857281  680821 node_ready.go:38] duration metric: took 15.183316ms waiting for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.857290  680821 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:03.880136  680821 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392506  680821 pod_ready.go:92] pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.392542  680821 pod_ready.go:81] duration metric: took 1.512370879s waiting for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392556  680821 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402272  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.402382  680821 pod_ready.go:81] duration metric: took 9.816254ms waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402410  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414813  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.414844  680821 pod_ready.go:81] duration metric: took 12.42049ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414861  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424628  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.424651  680821 pod_ready.go:81] duration metric: took 9.782ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424660  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445652  680821 pod_ready.go:92] pod "kube-proxy-f7mgv" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.445679  680821 pod_ready.go:81] duration metric: took 21.012459ms waiting for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445692  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.459758  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.204942723s)
	I0130 22:19:07.459833  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:07.475749  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:19:07.487056  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:19:07.498268  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:19:07.498316  681007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:19:07.552393  681007 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:19:07.552482  681007 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:19:07.703415  681007 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:19:07.703558  681007 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:19:07.703688  681007 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:19:07.929127  681007 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:19:07.931129  681007 out.go:204]   - Generating certificates and keys ...
	I0130 22:19:07.931256  681007 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:19:07.931340  681007 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:19:07.931443  681007 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:19:07.931568  681007 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:19:07.931907  681007 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:19:07.933061  681007 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:19:07.934226  681007 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:19:07.935564  681007 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:19:07.936846  681007 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:19:07.938253  681007 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:19:07.939205  681007 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:19:07.939281  681007 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:19:08.017218  681007 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:19:08.179939  681007 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:19:08.390089  681007 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:19:08.500690  681007 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:19:08.501201  681007 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:19:08.506551  681007 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:19:08.508442  681007 out.go:204]   - Booting up control plane ...
	I0130 22:19:08.508554  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:19:08.508643  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:19:08.509176  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:19:08.528978  681007 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:19:08.529909  681007 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:19:08.530016  681007 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:19:08.657813  681007 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:19:05.846282  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.846316  680821 pod_ready.go:81] duration metric: took 400.615309ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.846329  680821 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.854210  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:10.354894  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:12.358737  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:14.361808  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:16.661056  681007 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003483 seconds
	I0130 22:19:16.663313  681007 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:19:16.682919  681007 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:19:17.218185  681007 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:19:17.218446  681007 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-850803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:19:17.733745  681007 kubeadm.go:322] [bootstrap-token] Using token: oi6eg1.osding0t7oyyeu0p
	I0130 22:19:17.735211  681007 out.go:204]   - Configuring RBAC rules ...
	I0130 22:19:17.735388  681007 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:19:17.744899  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:19:17.754341  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:19:17.758107  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:19:17.761508  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:19:17.765503  681007 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:19:17.781414  681007 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:19:18.095502  681007 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:19:18.190245  681007 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:19:18.190272  681007 kubeadm.go:322] 
	I0130 22:19:18.190348  681007 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:19:18.190360  681007 kubeadm.go:322] 
	I0130 22:19:18.190452  681007 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:19:18.190461  681007 kubeadm.go:322] 
	I0130 22:19:18.190493  681007 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:19:18.190604  681007 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:19:18.190702  681007 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:19:18.190716  681007 kubeadm.go:322] 
	I0130 22:19:18.190800  681007 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:19:18.190835  681007 kubeadm.go:322] 
	I0130 22:19:18.190892  681007 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:19:18.190906  681007 kubeadm.go:322] 
	I0130 22:19:18.190976  681007 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:19:18.191074  681007 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:19:18.191178  681007 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:19:18.191191  681007 kubeadm.go:322] 
	I0130 22:19:18.191293  681007 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:19:18.191416  681007 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:19:18.191438  681007 kubeadm.go:322] 
	I0130 22:19:18.191544  681007 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.191672  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:19:18.191703  681007 kubeadm.go:322] 	--control-plane 
	I0130 22:19:18.191714  681007 kubeadm.go:322] 
	I0130 22:19:18.191814  681007 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:19:18.191824  681007 kubeadm.go:322] 
	I0130 22:19:18.191936  681007 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.192085  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:19:18.192660  681007 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
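The kubeadm output above ends with the standard join command, which pairs a bootstrap token with a --discovery-token-ca-cert-hash. A minimal sketch of how that hash can be recomputed on the control-plane host, assuming the CA lives under the certificateDir shown earlier ("/var/lib/minikube/certs"); the openssl pipeline is the one documented upstream for kubeadm and is not something this log actually runs:

	# Recompute the discovery hash from the cluster CA (path taken from the
	# "[certs] Using certificateDir folder" line above).
	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# The output should match the sha256:... value shown in the kubeadm join lines above.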
	I0130 22:19:18.192684  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:19:18.192692  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:19:18.194376  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:19:18.195608  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:19:18.244311  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
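The two lines above write a 457-byte bridge CNI config to /etc/cni/net.d/1-k8s.conflist, but its contents are not reproduced in the log. A sketch for inspecting what was installed on the node; the expectation that it is a "bridge"-type conflist with host-local IPAM is an assumption based on the "Configuring bridge CNI" message, not on the file itself:

	# Inspect the CNI config that was just written (path taken from the scp line above).
	sudo ls -la /etc/cni/net.d/
	sudo cat /etc/cni/net.d/1-k8s.conflist
	# Assumption: the conflist's first plugin has "type": "bridge" and a "host-local" ipam
	# section; the exact bytes are not shown in this log.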
	I0130 22:19:18.285107  681007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:19:18.285193  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.285210  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=default-k8s-diff-port-850803 minikube.k8s.io/updated_at=2024_01_30T22_19_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.682930  681007 ops.go:34] apiserver oom_adj: -16
	I0130 22:19:18.683119  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:16.854674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:18.854723  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:19.184109  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:19.683715  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.183529  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.684197  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.184124  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.684022  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.184033  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.683812  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.184203  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.683513  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.857387  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:23.354163  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:25.354683  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:24.184064  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:24.683177  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.183896  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.683522  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.183779  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.683891  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.183468  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.683878  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.183471  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.683793  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.853744  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:30.356959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:29.183658  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:29.683264  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.183311  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.683828  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.183841  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.287952  681007 kubeadm.go:1088] duration metric: took 13.002835585s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:31.287988  681007 kubeadm.go:406] StartCluster complete in 5m12.874624935s
	I0130 22:19:31.288014  681007 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.288132  681007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:31.290435  681007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.290772  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:31.290924  681007 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:31.291004  681007 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291027  681007 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291024  681007 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850803"
	W0130 22:19:31.291035  681007 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:31.291044  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:31.291048  681007 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291053  681007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850803"
	I0130 22:19:31.291078  681007 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291084  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	W0130 22:19:31.291089  681007 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:31.291142  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.291497  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291528  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291577  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291578  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.308624  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0130 22:19:31.308641  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0130 22:19:31.308628  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0130 22:19:31.309140  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309143  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309231  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309662  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309683  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309807  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309825  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309829  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309837  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.310304  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310324  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310621  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.310944  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.310983  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.311193  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.311237  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.314600  681007 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-850803"
	W0130 22:19:31.314619  681007 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:31.314641  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.314888  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.314923  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.331266  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0130 22:19:31.331358  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0130 22:19:31.332259  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332277  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332769  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332791  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.332930  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332949  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.333243  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333307  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333459  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.333534  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.335458  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.337520  681007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:31.335819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.338601  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0130 22:19:31.338925  681007 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.338944  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:31.338969  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.340850  681007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:31.339883  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.341794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.342314  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.342344  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:31.342364  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:31.342381  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.342456  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.342572  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.342787  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.342807  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.342806  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.343515  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.344047  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.344096  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.345163  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346044  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.346073  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346341  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.346515  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.346617  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.346703  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.360658  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0130 22:19:31.361009  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.361631  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.361653  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.362059  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.362284  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.363819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.364079  681007 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.364091  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:31.364104  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.367056  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367482  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.367508  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367705  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.367877  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.368024  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.368159  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.486668  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:31.512324  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.548212  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:31.548241  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:31.565423  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.607291  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:31.607318  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:31.647162  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.647192  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:31.723006  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.913300  681007 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850803" context rescaled to 1 replicas
	I0130 22:19:31.913355  681007 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:31.915323  681007 out.go:177] * Verifying Kubernetes components...
	I0130 22:19:31.916700  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:33.003770  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.517052198s)
	I0130 22:19:33.003803  681007 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
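The replace completed above (the sed pipeline launched at 22:19:31.486668) injects a hosts block resolving host.minikube.internal to 192.168.50.1 into the CoreDNS Corefile. A minimal sketch for confirming the injected block on this cluster, reusing the kubectl binary and kubeconfig paths from the logged commands; the grep pattern simply matches the text the sed expression inserts:

	sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A 3 'hosts {'
	# Expected fragment, per the sed expression logged above:
	#   hosts {
	#      192.168.50.1 host.minikube.internal
	#      fallthrough
	#   }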
	I0130 22:19:33.533121  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020753837s)
	I0130 22:19:33.533193  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533208  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533167  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967690921s)
	I0130 22:19:33.533306  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533322  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533714  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533727  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533728  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533738  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533747  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533745  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533759  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533769  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533802  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533973  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533987  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.535503  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.535515  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.535531  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.628879  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.628911  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.629222  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.629249  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.629251  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.742264  681007 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.825530161s)
	I0130 22:19:33.742301  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.019251933s)
	I0130 22:19:33.742328  681007 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.742355  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742371  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.742681  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.742701  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.742712  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742736  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.743035  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.743058  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.743072  681007 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:33.745046  681007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:33.746494  681007 addons.go:505] enable addons completed in 2.455579767s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:33.792738  681007 node_ready.go:49] node "default-k8s-diff-port-850803" has status "Ready":"True"
	I0130 22:19:33.792765  681007 node_ready.go:38] duration metric: took 50.422631ms waiting for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.792774  681007 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:33.814090  681007 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:32.853930  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.854970  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.821685  681007 pod_ready.go:92] pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.821713  681007 pod_ready.go:81] duration metric: took 1.007586687s waiting for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.821725  681007 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827824  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.827846  681007 pod_ready.go:81] duration metric: took 6.114329ms waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827855  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835557  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.835577  681007 pod_ready.go:81] duration metric: took 7.716283ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835586  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846707  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.846730  681007 pod_ready.go:81] duration metric: took 11.137144ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846742  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855583  681007 pod_ready.go:92] pod "kube-proxy-9b97q" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:35.855607  681007 pod_ready.go:81] duration metric: took 1.00885903s waiting for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855616  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146642  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:36.146669  681007 pod_ready.go:81] duration metric: took 291.044646ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146679  681007 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:38.154183  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:37.354609  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:39.854928  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:40.154641  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:42.159531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:41.855320  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.354523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.654954  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:47.154579  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:46.355021  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:48.853459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:49.653829  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:51.655608  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:50.853891  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:52.854695  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:55.354018  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:54.154453  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:56.155065  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:58.657247  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:57.853975  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:00.354902  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:01.153907  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:03.654237  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:02.854731  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:05.356880  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:06.155143  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:08.155296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:07.856132  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.356464  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.155799  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.654333  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.853942  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.354885  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.154056  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.154535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.853402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:20.353980  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:19.655422  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.154392  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.354117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.355044  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.155171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.655471  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.854532  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.354204  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.154677  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.654466  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.356403  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:33.356906  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:34.154078  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:36.654298  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:35.853262  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:37.857523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:40.354097  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:39.154049  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:41.654457  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:43.654895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:42.355195  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:44.854639  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:45.655775  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:48.155289  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:47.357754  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:49.855799  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:50.155498  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.655409  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.353449  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:54.354453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:55.155034  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:57.654844  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:56.354612  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:58.854992  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:59.655694  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.656577  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.353141  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:03.353830  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:04.154299  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:06.654312  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.654807  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:05.854650  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.353951  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.354031  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.655061  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.655432  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.354994  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:14.855265  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:15.159097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:17.653783  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:16.857702  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.359396  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.655858  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:22.156091  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:21.854394  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.354360  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.655296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:27.158080  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:26.855014  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.356117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.653580  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:32.154606  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:31.854704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.355484  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.654068  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.654158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.654269  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.357452  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.855223  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:40.655689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.154796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:41.354371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.854228  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:45.155130  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:47.155889  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:46.355266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:48.355485  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:50.362578  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:49.653701  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:51.655019  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:52.854642  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:55.353605  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:54.154411  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:56.654614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:58.660728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:57.854182  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:00.354287  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:01.155135  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:03.654733  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:02.853711  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:04.854845  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:05.656121  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:08.154541  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:07.353888  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:09.354542  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:10.653671  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:12.657917  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:11.854575  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:14.354327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:15.157012  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:17.158822  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:16.354558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:18.355214  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:19.655591  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.154262  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:20.855145  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.855595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:25.354646  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:24.654590  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:26.655050  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:27.357453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.854619  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.154225  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.156000  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:33.654263  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.855106  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:34.354611  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:35.654550  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:37.654631  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:36.856135  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.354424  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.655008  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.657897  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.659483  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.354687  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.354978  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:46.154172  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:48.154643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:45.853374  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:47.854345  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.353899  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.655054  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.655335  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.354795  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.853217  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.655525  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:57.153994  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:56.856987  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.353446  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.157129  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.655835  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.657302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.355499  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.356368  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:06.154373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:08.654373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854404  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854432  680821 pod_ready.go:81] duration metric: took 4m0.008096056s waiting for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:05.854442  680821 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:05.854449  680821 pod_ready.go:38] duration metric: took 4m1.997150293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
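(The lines above show minikube's pod_ready wait loop giving up after its 4-minute deadline while the metrics-server pod never reports Ready, ending with "context deadline exceeded". Below is a minimal, hypothetical sketch of that pattern — polling a pod's Ready condition under a context deadline with client-go. The function names, the 2-second poll interval, and the 4-minute timeout are illustrative assumptions, not minikube's pod_ready.go code.)

// waitpodready.go - minimal sketch of polling a pod's Ready condition under a
// deadline, similar in spirit to the pod_ready.go wait seen in the log above.
// All names and intervals here are illustrative assumptions.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) error {
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready
				}
			}
		}
		select {
		case <-ctx.Done():
			// mirrors the "waitPodCondition: context deadline exceeded" error above
			return fmt.Errorf("waitPodCondition: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	if err := waitPodReady(ctx, cs, "kube-system", "metrics-server-57f55c9bc5-vhxng"); err != nil {
		fmt.Println("not ready:", err)
	}
}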
	I0130 22:23:05.854467  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:05.854502  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:05.854561  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:05.929032  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:05.929061  680821 cri.go:89] found id: ""
	I0130 22:23:05.929073  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:05.929137  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.934693  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:05.934777  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:05.982312  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:05.982342  680821 cri.go:89] found id: ""
	I0130 22:23:05.982352  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:05.982417  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.986932  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:05.986988  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:06.031983  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.032007  680821 cri.go:89] found id: ""
	I0130 22:23:06.032015  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:06.032073  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.036373  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:06.036429  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:06.084796  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.084829  680821 cri.go:89] found id: ""
	I0130 22:23:06.084840  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:06.084908  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.089120  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:06.089185  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:06.139977  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.139998  680821 cri.go:89] found id: ""
	I0130 22:23:06.140006  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:06.140063  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.144088  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:06.144147  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:06.185075  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.185103  680821 cri.go:89] found id: ""
	I0130 22:23:06.185113  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:06.185164  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.189014  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:06.189070  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:06.223430  680821 cri.go:89] found id: ""
	I0130 22:23:06.223459  680821 logs.go:284] 0 containers: []
	W0130 22:23:06.223469  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:06.223477  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:06.223529  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:06.260048  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.260071  680821 cri.go:89] found id: ""
	I0130 22:23:06.260083  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:06.260141  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.263987  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:06.264013  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:06.315899  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:06.315930  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:06.366903  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:06.366935  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.406395  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:06.406429  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.445937  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:06.445967  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:06.507335  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:06.507368  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.559276  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:06.559313  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.618349  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:06.618390  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.660376  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:06.660410  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:07.080461  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:07.080504  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:07.153607  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.153767  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.176441  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:07.176475  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:07.191016  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:07.191045  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:07.338888  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.338919  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:07.339094  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:07.339109  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.339121  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.339129  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.339142  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
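(The cycle above enumerates each control-plane component's container ID with "crictl ps -a --quiet --name=<component>" and then tails the last 400 lines of its logs with "crictl logs --tail 400 <id>", alongside journalctl for crio and the kubelet. The sketch below reproduces just the crictl part by shelling out the same commands; the helper names are hypothetical and this is not minikube's logs.go implementation.)

// gatherlogs.go - minimal sketch of the per-component log gathering visible above:
// resolve a container ID with crictl, then tail its logs. Helper names are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerID mirrors "sudo crictl ps -a --quiet --name=<component>" from the log above.
func containerID(component string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		return "", err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return "", fmt.Errorf("no container was found matching %q", component)
	}
	return ids[0], nil
}

// tailLogs mirrors "sudo /usr/bin/crictl logs --tail 400 <id>" from the log above.
func tailLogs(id string) (string, error) {
	out, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		id, err := containerID(c)
		if err != nil {
			fmt.Println(c, ":", err) // e.g. kindnet has no container in these runs
			continue
		}
		logs, _ := tailLogs(id)
		fmt.Printf("==> %s [%s]\n%s\n", c, id, logs)
	}
}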
	I0130 22:23:10.656229  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:13.154689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:15.156258  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.654584  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.340518  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:17.358757  680821 api_server.go:72] duration metric: took 4m15.748181205s to wait for apiserver process to appear ...
	I0130 22:23:17.358785  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:17.358824  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:17.358882  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:17.402796  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:17.402819  680821 cri.go:89] found id: ""
	I0130 22:23:17.402827  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:17.402878  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.408452  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:17.408525  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:17.454148  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.454174  680821 cri.go:89] found id: ""
	I0130 22:23:17.454185  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:17.454260  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.458375  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:17.458450  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:17.508924  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:17.508953  680821 cri.go:89] found id: ""
	I0130 22:23:17.508960  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:17.509011  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.512833  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:17.512900  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:17.556821  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:17.556849  680821 cri.go:89] found id: ""
	I0130 22:23:17.556857  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:17.556913  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.561605  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:17.561666  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:17.604962  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.604991  680821 cri.go:89] found id: ""
	I0130 22:23:17.605001  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:17.605078  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.611321  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:17.611395  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:17.651827  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:17.651860  680821 cri.go:89] found id: ""
	I0130 22:23:17.651869  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:17.651918  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.656414  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:17.656472  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:17.696085  680821 cri.go:89] found id: ""
	I0130 22:23:17.696120  680821 logs.go:284] 0 containers: []
	W0130 22:23:17.696130  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:17.696139  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:17.696197  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:17.742145  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.742171  680821 cri.go:89] found id: ""
	I0130 22:23:17.742183  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:17.742248  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.746837  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:17.746861  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:17.864654  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:17.864691  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.917753  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:17.917785  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.958876  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:17.958914  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.997774  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:17.997811  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:18.047778  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:18.047823  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:18.111572  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:18.111621  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:18.489601  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:18.489683  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:18.549905  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:18.549953  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:18.631865  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.632060  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.656777  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:18.656813  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:18.670944  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:18.670973  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:18.726388  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:18.726424  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:18.766317  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766350  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:18.766427  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:18.766446  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.766460  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.766473  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766485  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:20.155531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:22.654846  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:25.153520  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:27.158571  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:28.767516  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:23:28.774562  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:23:28.775796  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:28.775824  680821 api_server.go:131] duration metric: took 11.417031075s to wait for apiserver health ...
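(The lines above show the apiserver healthz probe against https://192.168.72.213:8443/healthz succeeding with "200: ok" before minikube moves on to waiting for kube-system pods. Below is a minimal, hypothetical sketch of such a probe; a real client would use the cluster CA and client certificates from the kubeconfig, and InsecureSkipVerify is used here only to keep the illustration short.)

// healthz.go - minimal sketch of probing the apiserver /healthz endpoint as in the log above.
// Illustrative only: production code should load TLS material from the kubeconfig.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s returned 200:\n%s\n", url, body) // expect "ok"
	return nil
}

func main() {
	if err := checkHealthz("https://192.168.72.213:8443/healthz"); err != nil {
		fmt.Println(err)
	}
}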
	I0130 22:23:28.775834  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:28.775860  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:28.775909  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:28.821439  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:28.821462  680821 cri.go:89] found id: ""
	I0130 22:23:28.821490  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:28.821556  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.826438  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:28.826495  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:28.870075  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:28.870104  680821 cri.go:89] found id: ""
	I0130 22:23:28.870113  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:28.870169  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.874672  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:28.874741  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:28.917733  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:28.917761  680821 cri.go:89] found id: ""
	I0130 22:23:28.917775  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:28.917835  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.925522  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:28.925586  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:28.979761  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:28.979793  680821 cri.go:89] found id: ""
	I0130 22:23:28.979803  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:28.979866  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.983990  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:28.984044  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:29.022516  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.022543  680821 cri.go:89] found id: ""
	I0130 22:23:29.022553  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:29.022604  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.026989  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:29.027069  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:29.065167  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.065194  680821 cri.go:89] found id: ""
	I0130 22:23:29.065205  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:29.065268  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.069436  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:29.069512  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:29.109503  680821 cri.go:89] found id: ""
	I0130 22:23:29.109532  680821 logs.go:284] 0 containers: []
	W0130 22:23:29.109539  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:29.109546  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:29.109599  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:29.158319  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:29.158343  680821 cri.go:89] found id: ""
	I0130 22:23:29.158350  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:29.158437  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.163004  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:29.163025  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:29.540158  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:29.540203  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:29.616783  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:29.616947  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:29.638172  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:29.638207  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:29.761562  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:29.761613  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:29.803930  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:29.803976  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:29.866722  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:29.866763  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.912093  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:29.912125  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.970591  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:29.970624  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:29.984722  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:29.984748  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:30.040548  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:30.040589  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:30.089982  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:30.090027  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:30.128235  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:30.128267  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:30.169872  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.169906  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:30.169982  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:30.169997  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:30.170008  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:30.170026  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.170035  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:29.653518  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:32.155147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:34.653672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:36.155187  681007 pod_ready.go:81] duration metric: took 4m0.008494222s waiting for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:36.155214  681007 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:36.155224  681007 pod_ready.go:38] duration metric: took 4m2.362439314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:23:36.155243  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:36.155283  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:36.155343  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:36.205838  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:36.205866  681007 cri.go:89] found id: ""
	I0130 22:23:36.205875  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:36.205945  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.210477  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:36.210558  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:36.253110  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:36.253139  681007 cri.go:89] found id: ""
	I0130 22:23:36.253148  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:36.253204  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.257054  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:36.257124  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:36.296932  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.296959  681007 cri.go:89] found id: ""
	I0130 22:23:36.296971  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:36.297034  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.301030  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:36.301080  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:36.339966  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:36.339992  681007 cri.go:89] found id: ""
	I0130 22:23:36.340002  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:36.340062  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.345411  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:36.345474  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:36.389010  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.389031  681007 cri.go:89] found id: ""
	I0130 22:23:36.389039  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:36.389091  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.392885  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:36.392969  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:36.430208  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:36.430228  681007 cri.go:89] found id: ""
	I0130 22:23:36.430237  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:36.430282  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.434507  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:36.434562  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:36.483517  681007 cri.go:89] found id: ""
	I0130 22:23:36.483542  681007 logs.go:284] 0 containers: []
	W0130 22:23:36.483549  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:36.483555  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:36.483613  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:36.543345  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:36.543370  681007 cri.go:89] found id: ""
	I0130 22:23:36.543380  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:36.543445  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.548033  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:36.548064  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:36.630123  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630304  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630456  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630629  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:36.651951  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:36.651990  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:36.667227  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:36.667261  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:36.815056  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:36.815097  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.856960  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:36.856992  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.903856  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:36.903909  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:37.318919  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:37.318964  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:37.368999  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:37.369037  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:37.412698  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:37.412727  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:37.459356  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:37.459389  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:37.509418  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:37.509454  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:37.551349  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:37.551392  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:37.597863  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597892  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:37.597945  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:37.597958  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597964  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597976  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597982  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:37.597988  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597998  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
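For reference, the log-gathering pass above is a fixed set of commands run over SSH inside the guest. A rough manual equivalent, assuming a crio-based minikube node reachable with minikube ssh (the profile name is taken from the kubelet entries above; CONTAINER_ID is a placeholder):

    # kubelet and CRI-O unit journals plus recent kernel warnings -- the same sources gathered above
    minikube ssh -p default-k8s-diff-port-850803 -- sudo journalctl -u kubelet -n 400
    minikube ssh -p default-k8s-diff-port-850803 -- sudo journalctl -u crio -n 400
    minikube ssh -p default-k8s-diff-port-850803 -- "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
    # per-container logs by ID, with the same crictl-or-docker fallback used above for container status
    minikube ssh -p default-k8s-diff-port-850803 -- sudo /usr/bin/crictl logs --tail 400 CONTAINER_ID
    minikube ssh -p default-k8s-diff-port-850803 -- 'sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a'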
	I0130 22:23:40.180631  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:23:40.180660  680821 system_pods.go:61] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.180665  680821 system_pods.go:61] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.180669  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.180674  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.180678  680821 system_pods.go:61] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.180683  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.180693  680821 system_pods.go:61] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.180701  680821 system_pods.go:61] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.180710  680821 system_pods.go:74] duration metric: took 11.404869748s to wait for pod list to return data ...
	I0130 22:23:40.180749  680821 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:23:40.184327  680821 default_sa.go:45] found service account: "default"
	I0130 22:23:40.184349  680821 default_sa.go:55] duration metric: took 3.590968ms for default service account to be created ...
	I0130 22:23:40.184356  680821 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:23:40.194745  680821 system_pods.go:86] 8 kube-system pods found
	I0130 22:23:40.194769  680821 system_pods.go:89] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.194774  680821 system_pods.go:89] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.194779  680821 system_pods.go:89] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.194784  680821 system_pods.go:89] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.194788  680821 system_pods.go:89] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.194791  680821 system_pods.go:89] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.194800  680821 system_pods.go:89] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.194805  680821 system_pods.go:89] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.194812  680821 system_pods.go:126] duration metric: took 10.451241ms to wait for k8s-apps to be running ...
	I0130 22:23:40.194817  680821 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:23:40.194866  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:23:40.214067  680821 system_svc.go:56] duration metric: took 19.241185ms WaitForService to wait for kubelet.
	I0130 22:23:40.214091  680821 kubeadm.go:581] duration metric: took 4m38.603520566s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:23:40.214134  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:23:40.217725  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:23:40.217791  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:23:40.217812  680821 node_conditions.go:105] duration metric: took 3.672364ms to run NodePressure ...
	I0130 22:23:40.217827  680821 start.go:228] waiting for startup goroutines ...
	I0130 22:23:40.217840  680821 start.go:233] waiting for cluster config update ...
	I0130 22:23:40.217857  680821 start.go:242] writing updated cluster config ...
	I0130 22:23:40.218114  680821 ssh_runner.go:195] Run: rm -f paused
	I0130 22:23:40.275722  680821 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:23:40.278571  680821 out.go:177] * Done! kubectl is now configured to use "embed-certs-713938" cluster and "default" namespace by default
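The wait loop that just finished for embed-certs-713938 checks, in order: that the kube-system pods are present and running, that the "default" service account exists, that the kubelet unit is active, and that the node reports sane capacity. A quick manual spot-check of the same conditions (a sketch only; the kubectl context name follows the profile name set up by the Done! message above):

    kubectl --context embed-certs-713938 get pods -n kube-system            # everything Running except metrics-server, which stays Pending here
    kubectl --context embed-certs-713938 get serviceaccount default -n default
    kubectl --context embed-certs-713938 describe node embed-certs-713938 | grep -A 6 Capacity
    minikube ssh -p embed-certs-713938 -- sudo systemctl is-active kubelet  # same check the test performs over SSH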
	I0130 22:23:47.599324  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:47.615605  681007 api_server.go:72] duration metric: took 4m15.702208866s to wait for apiserver process to appear ...
	I0130 22:23:47.615630  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:47.615671  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:47.615722  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:47.660944  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:47.660980  681007 cri.go:89] found id: ""
	I0130 22:23:47.660997  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:47.661051  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.666115  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:47.666180  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:47.709726  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:47.709750  681007 cri.go:89] found id: ""
	I0130 22:23:47.709760  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:47.709821  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.714636  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:47.714691  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:47.760216  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:47.760245  681007 cri.go:89] found id: ""
	I0130 22:23:47.760262  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:47.760323  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.765395  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:47.765450  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:47.815572  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:47.815604  681007 cri.go:89] found id: ""
	I0130 22:23:47.815614  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:47.815674  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.819670  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:47.819729  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:47.858767  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:47.858795  681007 cri.go:89] found id: ""
	I0130 22:23:47.858805  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:47.858865  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.863151  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:47.863276  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:47.911294  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:47.911319  681007 cri.go:89] found id: ""
	I0130 22:23:47.911327  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:47.911387  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.915772  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:47.915852  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:47.952096  681007 cri.go:89] found id: ""
	I0130 22:23:47.952125  681007 logs.go:284] 0 containers: []
	W0130 22:23:47.952136  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:47.952144  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:47.952229  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:47.990137  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:47.990162  681007 cri.go:89] found id: ""
	I0130 22:23:47.990170  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:47.990228  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.994880  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:47.994902  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:48.068521  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068700  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068849  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.069010  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.091781  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:48.091820  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:48.213688  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:48.213724  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:48.264200  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:48.264234  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:48.319751  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:48.319785  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:48.357815  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:48.357846  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:48.406822  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:48.406858  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:48.419822  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:48.419852  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:48.471685  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:48.471719  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:48.508040  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:48.508088  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:48.559268  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:48.559302  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:48.609976  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:48.610007  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:48.966774  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966810  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:48.966900  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:48.966912  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966919  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966927  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966934  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.966939  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966945  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:58.967938  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:23:58.973850  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:23:58.975689  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:58.975713  681007 api_server.go:131] duration metric: took 11.360076324s to wait for apiserver health ...
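The health wait above simply polls the apiserver's /healthz endpoint until it answers 200. A one-line equivalent against the same address (-k skips TLS verification; alternatively point curl at the profile's CA certificate):

    curl -k https://192.168.50.254:8444/healthz    # prints: ok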
	I0130 22:23:58.975720  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:58.975745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:58.975793  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:59.023436  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:59.023458  681007 cri.go:89] found id: ""
	I0130 22:23:59.023466  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:59.023514  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.027855  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:59.027916  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:59.067167  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:59.067194  681007 cri.go:89] found id: ""
	I0130 22:23:59.067204  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:59.067266  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.076124  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:59.076191  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:59.115918  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:59.115947  681007 cri.go:89] found id: ""
	I0130 22:23:59.115956  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:59.116011  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.120440  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:59.120489  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:59.165157  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.165185  681007 cri.go:89] found id: ""
	I0130 22:23:59.165194  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:59.165254  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.169774  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:59.169845  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:59.230609  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:59.230640  681007 cri.go:89] found id: ""
	I0130 22:23:59.230650  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:59.230713  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.235563  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:59.235653  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:59.279835  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.279866  681007 cri.go:89] found id: ""
	I0130 22:23:59.279886  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:59.279954  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.284745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:59.284809  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:59.331328  681007 cri.go:89] found id: ""
	I0130 22:23:59.331361  681007 logs.go:284] 0 containers: []
	W0130 22:23:59.331374  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:59.331380  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:59.331432  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:59.370468  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.370493  681007 cri.go:89] found id: ""
	I0130 22:23:59.370501  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:59.370553  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.375047  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:59.375075  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.428263  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:59.428297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.495321  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:59.495356  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.537553  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:59.537590  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:59.915651  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:59.915691  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:59.930178  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:59.930209  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:24:00.070621  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:24:00.070656  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:24:00.111617  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:24:00.111655  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:24:00.156067  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:24:00.156104  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:24:00.206264  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:24:00.206292  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:24:00.282212  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282436  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282642  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282805  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.304194  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:24:00.304223  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:24:00.355473  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:24:00.355508  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:24:00.402962  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403001  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:24:00.403077  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:24:00.403092  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403101  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403114  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403124  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.403136  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403144  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
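The kubelet problems repeated in each pass above are node-authorizer denials: they are typically transient right after a restart, before the API server's node-authorizer graph links the node to the pods that mount the coredns and kube-root-ca.crt ConfigMaps, and they stop once that relationship is established. The same permission can be probed from outside with kubectl impersonation (a sketch; assumes the current kubeconfig user may impersonate, as the default minikube admin can):

    kubectl --context default-k8s-diff-port-850803 auth can-i list configmaps -n kube-system \
        --as=system:node:default-k8s-diff-port-850803 --as-group=system:nodes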
	I0130 22:24:10.411200  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:24:10.411225  681007 system_pods.go:61] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.411231  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.411235  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.411239  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.411242  681007 system_pods.go:61] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.411246  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.411252  681007 system_pods.go:61] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.411258  681007 system_pods.go:61] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.411264  681007 system_pods.go:74] duration metric: took 11.435539762s to wait for pod list to return data ...
	I0130 22:24:10.411274  681007 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:24:10.413887  681007 default_sa.go:45] found service account: "default"
	I0130 22:24:10.413915  681007 default_sa.go:55] duration metric: took 2.635544ms for default service account to be created ...
	I0130 22:24:10.413923  681007 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:24:10.420235  681007 system_pods.go:86] 8 kube-system pods found
	I0130 22:24:10.420256  681007 system_pods.go:89] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.420263  681007 system_pods.go:89] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.420271  681007 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.420281  681007 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.420290  681007 system_pods.go:89] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.420301  681007 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.420311  681007 system_pods.go:89] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.420319  681007 system_pods.go:89] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.420327  681007 system_pods.go:126] duration metric: took 6.398195ms to wait for k8s-apps to be running ...
	I0130 22:24:10.420335  681007 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:24:10.420386  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:24:10.438372  681007 system_svc.go:56] duration metric: took 18.027152ms WaitForService to wait for kubelet.
	I0130 22:24:10.438396  681007 kubeadm.go:581] duration metric: took 4m38.525004918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:24:10.438424  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:24:10.441514  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:24:10.441561  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:24:10.441572  681007 node_conditions.go:105] duration metric: took 3.14294ms to run NodePressure ...
	I0130 22:24:10.441583  681007 start.go:228] waiting for startup goroutines ...
	I0130 22:24:10.441591  681007 start.go:233] waiting for cluster config update ...
	I0130 22:24:10.441601  681007 start.go:242] writing updated cluster config ...
	I0130 22:24:10.441855  681007 ssh_runner.go:195] Run: rm -f paused
	I0130 22:24:10.493274  681007 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:24:10.495414  681007 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850803" cluster and "default" namespace by default
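The CRI-O journal excerpt that follows comes from the no-preload-023824 node and shows the runtime answering CRI requests (ContainerStatus, ListContainers, Version, ImageFsInfo) at debug level. The same information can be queried interactively with crictl against the CRI-O socket, for example (a sketch; the container ID is one of the coredns containers listed in the dump):

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock inspect 9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock imagefsinfo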
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:13:18 UTC, ends at Tue 2024-01-30 22:28:05 UTC. --
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.392098440Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1706653140070663976,StartedAt:1706653140126745741,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/985cd51e-1832-487e-af5b-6a29108fc494/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/985cd51e-1832-487e-af5b-6a29108fc494/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/985cd51e-1832-487e-af5b-6a29108fc494/containers/coredns/3d682e1d,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/l
ib/kubelet/pods/985cd51e-1832-487e-af5b-6a29108fc494/volumes/kubernetes.io~projected/kube-api-access-jg6w4,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-znj8f_985cd51e-1832-487e-af5b-6a29108fc494/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=e8e4e575-5924-497b-9308-8c05fc19eb1e name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.392988392Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=64186157-7a39-4760-94a2-68bbaf52ee83 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.393098256Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1706653139858360019,StartedAt:1706653139915200643,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/coredns/coredns:v1.11.1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\
"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/coredns,HostPath:/var/lib/kubelet/pods/e5470bf8-982d-4707-8cd8-c0c0228219fa/volumes/kubernetes.io~configmap/config-volume,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/e5470bf8-982d-4707-8cd8-c0c0228219fa/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/e5470bf8-982d-4707-8cd8-c0c0228219fa/containers/coredns/28beb991,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/l
ib/kubelet/pods/e5470bf8-982d-4707-8cd8-c0c0228219fa/volumes/kubernetes.io~projected/kube-api-access-skmsw,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_coredns-76f75df574-rktrb_e5470bf8-982d-4707-8cd8-c0c0228219fa/coredns/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=64186157-7a39-4760-94a2-68bbaf52ee83 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.393611642Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=35d3b1e9-633c-4360-9434-ed79fbb89e8f name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.393722666Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1706653117414774563,StartedAt:1706653118877502847,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/604ca0fe424ef8aca193b8f29827fac1/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/604ca0fe424ef8aca193b8f29827fac1/containers/kube-scheduler/e4f4c39c,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-023824_604ca0fe424ef8aca193b8f29827fac1/kube-scheduler/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=35d3b1e9-633c-4360-9434-ed79fbb89e8f name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.394236858Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=baf0015f-f6ad-45f1-bb8c-ba1455020398 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.394343688Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1706653117397143771,StartedAt:1706653118815895260,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/etcd:3.5.10-0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/7bc1fdf64040bcde5c69fa9202b40e1a/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/7bc1fdf64040bcde5c69fa9202b40e1a/containers/etcd/2f29cbc1,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/etcd,HostPath:/var/lib/minikube/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs/etcd,HostPath:/var/lib/minikube/certs/etcd,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_etcd-no-preload-023824_7bc1fdf64040bcde5c69fa9202b40e1a/etcd/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=baf0015f-f6ad-45f1-bb8c-ba1455020398 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.395037244Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=70282c72-1893-4247-bfad-c31436a8364a name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.395170468Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1706653116859436617,StartedAt:1706653117956181375,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/0e9683a8229c0ddfd9d2b4f98700fe81/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/0e9683a8229c0ddfd9d2b4f98700fe81/containers/kube-controller-manager/3ac44183,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_
PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-023824_0e9683a8229c0ddfd9d2b4f98700fe81/kube-controller-manager/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=70282c72-1893-4247-bfad-c31436a8364a name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.395657295Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=6a37ecb5-cc09-4607-8f78-6106b7cf97a2 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.395766628Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},State:CONTAINER_RUNNING,CreatedAt:1706653116707647850,StartedAt:1706653117531447612,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-apiserver:v1.29.0-rc.2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/te
rmination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/05fe72f0b32ea68e0f89c1642a7c70f5/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/05fe72f0b32ea68e0f89c1642a7c70f5/containers/kube-apiserver/66375544,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/kube-system_kube-apiserver-no-preload-023824_05fe72f
0b32ea68e0f89c1642a7c70f5/kube-apiserver/2.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=6a37ecb5-cc09-4607-8f78-6106b7cf97a2 name=/runtime.v1.RuntimeService/ContainerStatus
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.414710193Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=8e3f04ca-e3f7-4817-85b3-1fc197e431b8 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.414767956Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=8e3f04ca-e3f7-4817-85b3-1fc197e431b8 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.416366492Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ea95e89b-b474-435b-85b9-e5f9f37c3613 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.416747601Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653685416736478,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ea95e89b-b474-435b-85b9-e5f9f37c3613 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.417425025Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=404bc87c-0ae0-4f04-973f-5a960a067037 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.417518613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=404bc87c-0ae0-4f04-973f-5a960a067037 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.417756517Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082,PodSandboxId:2ee105736d6a278e92c2c4780f713ec84115a6ea4c60c359f105c392e1133201,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706653140809459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb2b13-124f-427c-875c-ee1ea1178907,},Annotations:map[string]string{io.kubernetes.container.hash: 3d9b75e5,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46,PodSandboxId:f21c5be3455f4ed541d2e8f375827449e700f733267537b190a82b3bf51b572e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706653140673546043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ee699b-fd5f-4a47-b858-5b202d1e9384,},Annotations:map[string]string{io.kubernetes.container.hash: af9ddc11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,PodSandboxId:7514cc9f6e7b2bb99e41ffa7248e742b13fbc7d2cb069a3767c64e5cfe4967ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139919316385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,PodSandboxId:dba9fe5afbe2d757828a325002aa0151319c9e3ab2e53a976e99414bea9542a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139709859380,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8
cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,PodSandboxId:30b263bb4490b4b0e614559457e4ca2b7f0d9a53a3e840541cc20a58e0d2b39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:170665311713
4066027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,PodSandboxId:55ae03c8d1cc0fc2e69b0ed9c42b7693e6902fc55df959ee9fe00067267a62bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706653117097866601,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,PodSandboxId:1e4e292748c05b12b63873e31ea388eb89c10c9131184daf9eac871b99a155d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706653116709627845,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,PodSandboxId:d0d9d3e1f76a8f570082e38fbde0473a18ea5e0fee70c4d2a482dcbec8cf719b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706653116604364196,Labels:map[string]string{io.k
ubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=404bc87c-0ae0-4f04-973f-5a960a067037 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.450958977Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1b2a1233-0f16-4e62-a263-9f9718fc2390 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.451037557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1b2a1233-0f16-4e62-a263-9f9718fc2390 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.452664350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=150593bf-2fd2-4f8e-b4f9-cb31fbf4ecc0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.453084549Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653685453068791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=150593bf-2fd2-4f8e-b4f9-cb31fbf4ecc0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.454533917Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=251098cb-31a6-4dca-a5e0-11d44e53f93b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.454600332Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=251098cb-31a6-4dca-a5e0-11d44e53f93b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:28:05 no-preload-023824 crio[710]: time="2024-01-30 22:28:05.454851890Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082,PodSandboxId:2ee105736d6a278e92c2c4780f713ec84115a6ea4c60c359f105c392e1133201,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706653140809459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb2b13-124f-427c-875c-ee1ea1178907,},Annotations:map[string]string{io.kubernetes.container.hash: 3d9b75e5,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46,PodSandboxId:f21c5be3455f4ed541d2e8f375827449e700f733267537b190a82b3bf51b572e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706653140673546043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ee699b-fd5f-4a47-b858-5b202d1e9384,},Annotations:map[string]string{io.kubernetes.container.hash: af9ddc11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,PodSandboxId:7514cc9f6e7b2bb99e41ffa7248e742b13fbc7d2cb069a3767c64e5cfe4967ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139919316385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,PodSandboxId:dba9fe5afbe2d757828a325002aa0151319c9e3ab2e53a976e99414bea9542a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139709859380,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8
cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,PodSandboxId:30b263bb4490b4b0e614559457e4ca2b7f0d9a53a3e840541cc20a58e0d2b39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:170665311713
4066027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,PodSandboxId:55ae03c8d1cc0fc2e69b0ed9c42b7693e6902fc55df959ee9fe00067267a62bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706653117097866601,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,PodSandboxId:1e4e292748c05b12b63873e31ea388eb89c10c9131184daf9eac871b99a155d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706653116709627845,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,PodSandboxId:d0d9d3e1f76a8f570082e38fbde0473a18ea5e0fee70c4d2a482dcbec8cf719b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706653116604364196,Labels:map[string]string{io.k
ubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=251098cb-31a6-4dca-a5e0-11d44e53f93b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e38cae605fb7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2ee105736d6a2       storage-provisioner
	a3c418d415d66       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   9 minutes ago       Running             kube-proxy                0                   f21c5be3455f4       kube-proxy-8rn6v
	9966c08a886d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   7514cc9f6e7b2       coredns-76f75df574-znj8f
	5a605eb28b73f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   9 minutes ago       Running             coredns                   0                   dba9fe5afbe2d       coredns-76f75df574-rktrb
	725f7cb519d6c       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   9 minutes ago       Running             kube-scheduler            2                   30b263bb4490b       kube-scheduler-no-preload-023824
	0319bb836f3b9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   9 minutes ago       Running             etcd                      2                   55ae03c8d1cc0       etcd-no-preload-023824
	9c7ed3f938b75       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   9 minutes ago       Running             kube-controller-manager   2                   1e4e292748c05       kube-controller-manager-no-preload-023824
	fc1282976c3bf       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   9 minutes ago       Running             kube-apiserver            2                   d0d9d3e1f76a8       kube-apiserver-no-preload-023824
	
	
	==> coredns [5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44861 - 51576 "HINFO IN 6624304217867511352.7498625084823510745. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034469514s
	
	
	==> coredns [9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57767 - 45205 "HINFO IN 3851438653245790492.3560899168353838747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024764723s
	
	
	==> describe nodes <==
	Name:               no-preload-023824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-023824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=no-preload-023824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_18_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:18:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-023824
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 22:27:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:24:12 +0000   Tue, 30 Jan 2024 22:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:24:12 +0000   Tue, 30 Jan 2024 22:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:24:12 +0000   Tue, 30 Jan 2024 22:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:24:12 +0000   Tue, 30 Jan 2024 22:18:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.232
	  Hostname:    no-preload-023824
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2211a4f4aa3d427eb950d566eb36f14d
	  System UUID:                2211a4f4-aa3d-427e-b950-d566eb36f14d
	  Boot ID:                    fd69ba0b-2106-47cf-bc46-c0af7535ee48
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-rktrb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 coredns-76f75df574-znj8f                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m8s
	  kube-system                 etcd-no-preload-023824                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-023824             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-023824    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-8rn6v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m8s
	  kube-system                 kube-scheduler-no-preload-023824             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-nvplb              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m4s   kube-proxy       
	  Normal  Starting                 9m21s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m21s  kubelet          Node no-preload-023824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m21s  kubelet          Node no-preload-023824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m21s  kubelet          Node no-preload-023824 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m21s  kubelet          Node no-preload-023824 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m21s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m21s  kubelet          Node no-preload-023824 status is now: NodeReady
	  Normal  RegisteredNode           9m9s   node-controller  Node no-preload-023824 event: Registered Node no-preload-023824 in Controller
	
	
	==> dmesg <==
	[Jan30 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067703] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.333805] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.359432] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147710] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.350380] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.406055] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.113337] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.163267] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.117992] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.207025] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +28.802704] systemd-fstab-generator[1325]: Ignoring "noauto" for root device
	[Jan30 22:14] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 22:18] systemd-fstab-generator[3900]: Ignoring "noauto" for root device
	[  +9.298514] systemd-fstab-generator[4232]: Ignoring "noauto" for root device
	[ +13.237525] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9] <==
	{"level":"info","ts":"2024-01-30T22:18:38.911566Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e switched to configuration voters=(17981991576283729038)"}
	{"level":"info","ts":"2024-01-30T22:18:38.911636Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b57bc7a6641489a","local-member-id":"f98cde10e2754c8e","added-peer-id":"f98cde10e2754c8e","added-peer-peer-urls":["https://192.168.61.232:2380"]}
	{"level":"info","ts":"2024-01-30T22:18:38.913641Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-01-30T22:18:38.913838Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.232:2380"}
	{"level":"info","ts":"2024-01-30T22:18:38.913968Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.232:2380"}
	{"level":"info","ts":"2024-01-30T22:18:38.915707Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T22:18:38.915638Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f98cde10e2754c8e","initial-advertise-peer-urls":["https://192.168.61.232:2380"],"listen-peer-urls":["https://192.168.61.232:2380"],"advertise-client-urls":["https://192.168.61.232:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.232:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T22:18:39.080472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:39.080663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:39.080731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e received MsgPreVoteResp from f98cde10e2754c8e at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:39.080863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e became candidate at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.080933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e received MsgVoteResp from f98cde10e2754c8e at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.080975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e became leader at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.081049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f98cde10e2754c8e elected leader f98cde10e2754c8e at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.083465Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.085095Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f98cde10e2754c8e","local-member-attributes":"{Name:no-preload-023824 ClientURLs:[https://192.168.61.232:2379]}","request-path":"/0/members/f98cde10e2754c8e/attributes","cluster-id":"b57bc7a6641489a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T22:18:39.08532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:39.085876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b57bc7a6641489a","local-member-id":"f98cde10e2754c8e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.085983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.086013Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.087006Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:39.087875Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:39.088694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:39.089649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T22:18:39.093237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.232:2379"}
	
	
	==> kernel <==
	 22:28:05 up 14 min,  0 users,  load average: 0.77, 0.50, 0.30
	Linux no-preload-023824 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3] <==
	I0130 22:21:59.886709       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:23:41.021724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:23:41.021915       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0130 22:23:42.022538       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:23:42.022635       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:23:42.022663       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:23:42.022731       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:23:42.022904       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:23:42.024230       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:24:42.023157       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:24:42.023237       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:24:42.023247       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:24:42.024471       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:24:42.024573       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:24:42.024582       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:26:42.024175       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:26:42.024580       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:26:42.024633       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:26:42.024915       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:26:42.025001       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:26:42.026647       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8] <==
	I0130 22:22:26.685315       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:22:56.247872       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:22:56.693486       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:23:26.253664       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:23:26.701752       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:23:56.260943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:23:56.718966       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:24:26.267577       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:24:26.728066       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:24:45.728642       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="470.196µs"
	E0130 22:24:56.273323       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:24:56.737510       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:24:59.725089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="82.751µs"
	E0130 22:25:26.279267       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:25:26.745649       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:25:56.285409       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:25:56.755240       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:26:26.291403       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:26:26.764489       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:26:56.297979       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:26:56.773041       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:27:26.303749       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:27:26.784587       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:27:56.310491       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:27:56.793055       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46] <==
	I0130 22:19:01.035924       1 server_others.go:72] "Using iptables proxy"
	I0130 22:19:01.061685       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.232"]
	I0130 22:19:01.181469       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0130 22:19:01.181538       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 22:19:01.181557       1 server_others.go:168] "Using iptables Proxier"
	I0130 22:19:01.191214       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 22:19:01.191576       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0130 22:19:01.191630       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 22:19:01.194144       1 config.go:188] "Starting service config controller"
	I0130 22:19:01.194206       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 22:19:01.194230       1 config.go:97] "Starting endpoint slice config controller"
	I0130 22:19:01.194265       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 22:19:01.196301       1 config.go:315] "Starting node config controller"
	I0130 22:19:01.196347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 22:19:01.295360       1 shared_informer.go:318] Caches are synced for service config
	I0130 22:19:01.295407       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 22:19:01.297050       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a] <==
	W0130 22:18:41.050062       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 22:18:41.050103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 22:18:41.863064       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:18:41.863161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 22:18:42.028971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 22:18:42.029087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0130 22:18:42.130463       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 22:18:42.130524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0130 22:18:42.133313       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:42.133366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:42.155116       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:18:42.155194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 22:18:42.159048       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:42.159108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:42.179095       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:18:42.179147       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:18:42.183608       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:18:42.183680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 22:18:42.192991       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 22:18:42.193037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 22:18:42.207361       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:18:42.207481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 22:18:42.283463       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:18:42.283630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0130 22:18:45.142386       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:13:18 UTC, ends at Tue 2024-01-30 22:28:06 UTC. --
	Jan 30 22:25:13 no-preload-023824 kubelet[4239]: E0130 22:25:13.706568    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:25:25 no-preload-023824 kubelet[4239]: E0130 22:25:25.707137    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:25:38 no-preload-023824 kubelet[4239]: E0130 22:25:38.707248    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:25:44 no-preload-023824 kubelet[4239]: E0130 22:25:44.737333    4239 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:25:44 no-preload-023824 kubelet[4239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:25:44 no-preload-023824 kubelet[4239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:25:44 no-preload-023824 kubelet[4239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:25:51 no-preload-023824 kubelet[4239]: E0130 22:25:51.707761    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:26:04 no-preload-023824 kubelet[4239]: E0130 22:26:04.707478    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:26:19 no-preload-023824 kubelet[4239]: E0130 22:26:19.707639    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:26:31 no-preload-023824 kubelet[4239]: E0130 22:26:31.707643    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:26:44 no-preload-023824 kubelet[4239]: E0130 22:26:44.737595    4239 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:26:44 no-preload-023824 kubelet[4239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:26:44 no-preload-023824 kubelet[4239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:26:44 no-preload-023824 kubelet[4239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:26:46 no-preload-023824 kubelet[4239]: E0130 22:26:46.706927    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:26:58 no-preload-023824 kubelet[4239]: E0130 22:26:58.707833    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:27:11 no-preload-023824 kubelet[4239]: E0130 22:27:11.707855    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:27:26 no-preload-023824 kubelet[4239]: E0130 22:27:26.708551    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:27:40 no-preload-023824 kubelet[4239]: E0130 22:27:40.707693    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:27:44 no-preload-023824 kubelet[4239]: E0130 22:27:44.737716    4239 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:27:44 no-preload-023824 kubelet[4239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:27:44 no-preload-023824 kubelet[4239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:27:44 no-preload-023824 kubelet[4239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:27:55 no-preload-023824 kubelet[4239]: E0130 22:27:55.707491    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	
	
	==> storage-provisioner [e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082] <==
	I0130 22:19:01.174072       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:19:01.190337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:19:01.202361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:19:01.227441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:19:01.227627       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-023824_ef84bc8c-fcf0-4fc6-9afe-7fcb6c65e027!
	I0130 22:19:01.228606       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a498934e-fe5b-481a-835e-acf300322c01", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-023824_ef84bc8c-fcf0-4fc6-9afe-7fcb6c65e027 became leader
	I0130 22:19:01.328396       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-023824_ef84bc8c-fcf0-4fc6-9afe-7fcb6c65e027!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-023824 -n no-preload-023824
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-023824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nvplb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb: exit status 1 (67.020301ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nvplb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.39s)
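The post-mortem for this test boils down to three checks that the harness ran automatically. As a minimal sketch of the same checks run by hand (illustrative only, not part of the captured output; the profile name and pod name are taken from the log above):

	# is the apiserver for the profile reachable?
	out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-023824 -n no-preload-023824
	# which pods are not Running in any namespace?
	kubectl --context no-preload-023824 get po -A --field-selector=status.phase!=Running
	# describe the offender; in the captured run this returned NotFound (exit status 1)
	kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb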

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713938 -n embed-certs-713938
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:32:40.882000608 +0000 UTC m=+5532.282942730
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-713938 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-713938 logs -n 25: (1.667501326s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-742001                              | stopped-upgrade-742001       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-822826                              | cert-expiration-822826       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:09:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:09:08.900187  681007 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:09:08.900447  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900456  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:09:08.900460  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900635  681007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:09:08.901158  681007 out.go:303] Setting JSON to false
	I0130 22:09:08.902121  681007 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10301,"bootTime":1706642248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:09:08.902185  681007 start.go:138] virtualization: kvm guest
	I0130 22:09:08.904443  681007 out.go:177] * [default-k8s-diff-port-850803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:09:08.905904  681007 notify.go:220] Checking for updates...
	I0130 22:09:08.905916  681007 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:09:08.907548  681007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:09:08.908959  681007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:09:08.910401  681007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:09:08.911766  681007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:09:08.913044  681007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:09:08.914682  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:09:08.915157  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.915201  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.929650  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0130 22:09:08.930098  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.930701  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.930721  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.931048  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.931239  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.931458  681007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:09:08.931745  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.931778  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.946395  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0130 22:09:08.946754  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.947305  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.947328  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.947686  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.947865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.982088  681007 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 22:09:08.983300  681007 start.go:298] selected driver: kvm2
	I0130 22:09:08.983312  681007 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.983408  681007 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:09:08.984088  681007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:08.984161  681007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:09:08.997808  681007 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:09:08.998205  681007 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 22:09:08.998285  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:09:08.998305  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:09:08.998323  681007 start_flags.go:321] config:
	{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.998554  681007 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:09.000506  681007 out.go:177] * Starting control plane node default-k8s-diff-port-850803 in cluster default-k8s-diff-port-850803
	I0130 22:09:09.417791  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:09.001801  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:09:09.001832  681007 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 22:09:09.001844  681007 cache.go:56] Caching tarball of preloaded images
	I0130 22:09:09.001930  681007 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:09:09.001942  681007 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 22:09:09.002074  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:09:09.002279  681007 start.go:365] acquiring machines lock for default-k8s-diff-port-850803: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:09:15.497723  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:18.569709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:24.649709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:27.721682  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:33.801746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:36.873758  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:42.953715  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:46.025774  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:52.105752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:55.177803  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:01.257740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:04.329775  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:10.409748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:13.481709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:19.561742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:22.634236  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:28.713807  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:31.785746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:37.865734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:40.937754  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:47.017740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:50.089744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:56.169767  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:59.241735  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:05.321760  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:08.393763  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:14.473745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:17.545673  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:23.625780  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:26.697711  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:32.777688  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:35.849700  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:41.929752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:45.001744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:51.081733  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:54.153686  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:00.233749  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:03.305724  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:09.385748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:12.457710  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:18.537805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:21.609734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:27.689765  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:30.761718  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:36.841762  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:39.913805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:45.993742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:49.065753  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:55.145745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:58.217703  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.302231  680786 start.go:369] acquired machines lock for "no-preload-023824" in 4m22.656152529s
	I0130 22:13:07.302304  680786 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:07.302314  680786 fix.go:54] fixHost starting: 
	I0130 22:13:07.302790  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:07.302835  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:07.317987  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0130 22:13:07.318451  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:07.318943  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:13:07.318965  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:07.319340  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:07.319538  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:07.319679  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:13:07.321151  680786 fix.go:102] recreateIfNeeded on no-preload-023824: state=Stopped err=<nil>
	I0130 22:13:07.321173  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	W0130 22:13:07.321343  680786 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:07.322929  680786 out.go:177] * Restarting existing kvm2 VM for "no-preload-023824" ...
	I0130 22:13:04.297739  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.299984  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:07.300024  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:13:07.302029  680506 machine.go:91] provisioned docker machine in 4m44.646018806s
	I0130 22:13:07.302108  680506 fix.go:56] fixHost completed within 4m44.666279152s
	I0130 22:13:07.302116  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 4m44.666320503s
	W0130 22:13:07.302153  680506 start.go:694] error starting host: provision: host is not running
	W0130 22:13:07.302282  680506 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 22:13:07.302293  680506 start.go:709] Will try again in 5 seconds ...
	I0130 22:13:07.324101  680786 main.go:141] libmachine: (no-preload-023824) Calling .Start
	I0130 22:13:07.324252  680786 main.go:141] libmachine: (no-preload-023824) Ensuring networks are active...
	I0130 22:13:07.325034  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network default is active
	I0130 22:13:07.325415  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network mk-no-preload-023824 is active
	I0130 22:13:07.325804  680786 main.go:141] libmachine: (no-preload-023824) Getting domain xml...
	I0130 22:13:07.326696  680786 main.go:141] libmachine: (no-preload-023824) Creating domain...
	I0130 22:13:08.499216  680786 main.go:141] libmachine: (no-preload-023824) Waiting to get IP...
	I0130 22:13:08.500483  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.500933  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.501067  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.500931  681630 retry.go:31] will retry after 268.447444ms: waiting for machine to come up
	I0130 22:13:08.771705  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.772073  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.772101  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.772010  681630 retry.go:31] will retry after 235.233391ms: waiting for machine to come up
	I0130 22:13:09.008402  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.008795  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.008826  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.008757  681630 retry.go:31] will retry after 433.981592ms: waiting for machine to come up
	I0130 22:13:09.444576  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.444963  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.445001  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.444900  681630 retry.go:31] will retry after 518.108537ms: waiting for machine to come up
	I0130 22:13:12.306584  680506 start.go:365] acquiring machines lock for old-k8s-version-912992: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:13:09.964605  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.964956  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.964985  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.964919  681630 retry.go:31] will retry after 497.667085ms: waiting for machine to come up
	I0130 22:13:10.464522  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:10.464897  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:10.464930  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:10.464853  681630 retry.go:31] will retry after 918.136538ms: waiting for machine to come up
	I0130 22:13:11.384191  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:11.384665  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:11.384719  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:11.384630  681630 retry.go:31] will retry after 942.595537ms: waiting for machine to come up
	I0130 22:13:12.328976  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:12.329412  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:12.329438  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:12.329365  681630 retry.go:31] will retry after 1.080632129s: waiting for machine to come up
	I0130 22:13:13.411494  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:13.411880  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:13.411905  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:13.411830  681630 retry.go:31] will retry after 1.70851135s: waiting for machine to come up
	I0130 22:13:15.122731  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:15.123212  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:15.123244  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:15.123164  681630 retry.go:31] will retry after 1.890143577s: waiting for machine to come up
	I0130 22:13:17.016347  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:17.016789  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:17.016812  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:17.016745  681630 retry.go:31] will retry after 2.710901352s: waiting for machine to come up
	I0130 22:13:19.731235  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:19.731687  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:19.731717  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:19.731628  681630 retry.go:31] will retry after 3.494667363s: waiting for machine to come up
	I0130 22:13:23.227477  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:23.227894  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:23.227927  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:23.227844  681630 retry.go:31] will retry after 4.45900259s: waiting for machine to come up
	I0130 22:13:28.902379  680821 start.go:369] acquired machines lock for "embed-certs-713938" in 4m43.197815022s
	I0130 22:13:28.902454  680821 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:28.902466  680821 fix.go:54] fixHost starting: 
	I0130 22:13:28.902824  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:28.902863  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:28.922121  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0130 22:13:28.922554  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:28.923019  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:13:28.923040  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:28.923378  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:28.923587  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:28.923730  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:13:28.925000  680821 fix.go:102] recreateIfNeeded on embed-certs-713938: state=Stopped err=<nil>
	I0130 22:13:28.925042  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	W0130 22:13:28.925225  680821 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:28.927620  680821 out.go:177] * Restarting existing kvm2 VM for "embed-certs-713938" ...
	I0130 22:13:27.688611  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689047  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has current primary IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689078  680786 main.go:141] libmachine: (no-preload-023824) Found IP for machine: 192.168.61.232
	I0130 22:13:27.689095  680786 main.go:141] libmachine: (no-preload-023824) Reserving static IP address...
	I0130 22:13:27.689540  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.689585  680786 main.go:141] libmachine: (no-preload-023824) DBG | skip adding static IP to network mk-no-preload-023824 - found existing host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"}
	I0130 22:13:27.689610  680786 main.go:141] libmachine: (no-preload-023824) Reserved static IP address: 192.168.61.232
	I0130 22:13:27.689630  680786 main.go:141] libmachine: (no-preload-023824) Waiting for SSH to be available...
	I0130 22:13:27.689645  680786 main.go:141] libmachine: (no-preload-023824) DBG | Getting to WaitForSSH function...
	I0130 22:13:27.691725  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692037  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.692060  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692196  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH client type: external
	I0130 22:13:27.692236  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa (-rw-------)
	I0130 22:13:27.692288  680786 main.go:141] libmachine: (no-preload-023824) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:27.692305  680786 main.go:141] libmachine: (no-preload-023824) DBG | About to run SSH command:
	I0130 22:13:27.692318  680786 main.go:141] libmachine: (no-preload-023824) DBG | exit 0
	I0130 22:13:27.784900  680786 main.go:141] libmachine: (no-preload-023824) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:27.785232  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetConfigRaw
	I0130 22:13:27.786142  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:27.788581  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.788961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.788997  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.789280  680786 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/config.json ...
	I0130 22:13:27.789457  680786 machine.go:88] provisioning docker machine ...
	I0130 22:13:27.789489  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:27.789691  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.789857  680786 buildroot.go:166] provisioning hostname "no-preload-023824"
	I0130 22:13:27.789879  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.790013  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.792055  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792370  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.792405  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792478  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.792643  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.792790  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.793010  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.793205  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.793814  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.793842  680786 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-023824 && echo "no-preload-023824" | sudo tee /etc/hostname
	I0130 22:13:27.931141  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-023824
	
	I0130 22:13:27.931176  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.933882  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934242  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.934277  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934403  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.934588  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934748  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934917  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.935106  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.935413  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.935438  680786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-023824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-023824/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-023824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:28.067312  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:28.067345  680786 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:28.067368  680786 buildroot.go:174] setting up certificates
	I0130 22:13:28.067380  680786 provision.go:83] configureAuth start
	I0130 22:13:28.067389  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:28.067687  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.070381  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070751  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.070787  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070891  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.073317  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073672  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.073704  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073925  680786 provision.go:138] copyHostCerts
	I0130 22:13:28.074050  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:28.074092  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:28.074186  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:28.074311  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:28.074330  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:28.074381  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:28.074474  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:28.074485  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:28.074527  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:28.074604  680786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.no-preload-023824 san=[192.168.61.232 192.168.61.232 localhost 127.0.0.1 minikube no-preload-023824]
	I0130 22:13:28.175428  680786 provision.go:172] copyRemoteCerts
	I0130 22:13:28.175531  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:28.175566  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.178015  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178376  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.178416  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178540  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.178705  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.178860  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.179029  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.265687  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:28.287768  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:28.309363  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:28.331204  680786 provision.go:86] duration metric: configureAuth took 263.811459ms
	I0130 22:13:28.331232  680786 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:28.331476  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:13:28.331568  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.333837  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334205  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.334243  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334421  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.334626  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334804  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334978  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.335183  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.335552  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.335569  680786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:28.648182  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:28.648214  680786 machine.go:91] provisioned docker machine in 858.733436ms
	I0130 22:13:28.648228  680786 start.go:300] post-start starting for "no-preload-023824" (driver="kvm2")
	I0130 22:13:28.648254  680786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:28.648272  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.648633  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:28.648669  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.651616  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.651990  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.652019  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.652200  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.652427  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.652589  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.652737  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.742644  680786 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:28.746791  680786 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:28.746818  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:28.746949  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:28.747065  680786 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:28.747165  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:28.755371  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:28.776917  680786 start.go:303] post-start completed in 128.667778ms
	I0130 22:13:28.776944  680786 fix.go:56] fixHost completed within 21.474623735s
	I0130 22:13:28.776969  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.779261  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779562  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.779591  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779715  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.779938  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780109  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780291  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.780465  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.780778  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.780790  680786 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:28.902234  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652808.852489807
	
	I0130 22:13:28.902258  680786 fix.go:206] guest clock: 1706652808.852489807
	I0130 22:13:28.902265  680786 fix.go:219] Guest: 2024-01-30 22:13:28.852489807 +0000 UTC Remote: 2024-01-30 22:13:28.776948754 +0000 UTC m=+284.278530089 (delta=75.541053ms)
	I0130 22:13:28.902285  680786 fix.go:190] guest clock delta is within tolerance: 75.541053ms
	I0130 22:13:28.902291  680786 start.go:83] releasing machines lock for "no-preload-023824", held for 21.600013123s
	I0130 22:13:28.902314  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.902603  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.905058  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905455  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.905516  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905584  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906376  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906578  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906653  680786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:28.906711  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.906863  680786 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:28.906902  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.909484  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909525  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909824  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909856  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909886  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909902  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909952  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910141  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910150  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910347  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910350  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.910620  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:29.028948  680786 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:29.034774  680786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:29.182970  680786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:29.190306  680786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:29.190375  680786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:29.205114  680786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:29.205135  680786 start.go:475] detecting cgroup driver to use...
	I0130 22:13:29.205195  680786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:29.220998  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:29.234283  680786 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:29.234332  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:29.246205  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:29.258169  680786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:29.366756  680786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:29.499821  680786 docker.go:233] disabling docker service ...
	I0130 22:13:29.499908  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:29.513281  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:29.526823  680786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:29.644395  680786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:29.756912  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:29.768811  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:29.785830  680786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:29.785897  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.794702  680786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:29.794755  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.803342  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.812148  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.820802  680786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:29.830052  680786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:29.838334  680786 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:29.838402  680786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:29.849789  680786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:29.858298  680786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:29.968180  680786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:30.134232  680786 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:30.134309  680786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:30.139054  680786 start.go:543] Will wait 60s for crictl version
	I0130 22:13:30.139130  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.142760  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:30.183071  680786 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:30.183175  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.225981  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.276982  680786 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 22:13:28.928924  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Start
	I0130 22:13:28.929139  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring networks are active...
	I0130 22:13:28.929766  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network default is active
	I0130 22:13:28.930145  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network mk-embed-certs-713938 is active
	I0130 22:13:28.930485  680821 main.go:141] libmachine: (embed-certs-713938) Getting domain xml...
	I0130 22:13:28.931095  680821 main.go:141] libmachine: (embed-certs-713938) Creating domain...
	I0130 22:13:30.162733  680821 main.go:141] libmachine: (embed-certs-713938) Waiting to get IP...
	I0130 22:13:30.163807  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.164261  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.164352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.164238  681759 retry.go:31] will retry after 217.071442ms: waiting for machine to come up
	I0130 22:13:30.382542  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.382918  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.382952  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.382899  681759 retry.go:31] will retry after 372.773352ms: waiting for machine to come up
	I0130 22:13:30.278407  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:30.281307  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281730  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:30.281762  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281947  680786 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:30.285873  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:30.299947  680786 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:13:30.300015  680786 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:30.342071  680786 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 22:13:30.342094  680786 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:13:30.342198  680786 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.342218  680786 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.342257  680786 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.342278  680786 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.342288  680786 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.342205  680786 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.342265  680786 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 22:13:30.342563  680786 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343800  680786 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 22:13:30.343838  680786 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.343804  680786 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343805  680786 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.343809  680786 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.343801  680786 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.514364  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 22:13:30.529476  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.537822  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.540358  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.546677  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.559021  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.559189  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.579664  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.721137  680786 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 22:13:30.721228  680786 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.721280  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.745682  680786 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 22:13:30.745742  680786 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.745796  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750720  680786 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 22:13:30.750770  680786 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.750821  680786 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 22:13:30.750841  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750854  680786 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.750897  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768135  680786 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 22:13:30.768182  680786 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.768199  680786 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 22:13:30.768243  680786 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.768289  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768303  680786 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 22:13:30.768246  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768384  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.768329  680786 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.768499  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.768527  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.785074  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.785548  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.895706  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.895775  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.895925  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.910469  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910496  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910549  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 22:13:30.910578  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910584  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 22:13:30.910580  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910664  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.910628  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:30.928331  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 22:13:30.928431  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:30.958095  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958123  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958140  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 22:13:30.958176  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958205  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958178  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958249  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 22:13:30.958182  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958271  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958290  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 22:13:33.833277  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.87499883s)
	I0130 22:13:33.833318  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 22:13:33.833336  680786 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.875036585s)
	I0130 22:13:33.833372  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 22:13:33.833366  680786 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:33.833461  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.757262  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.757819  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.757870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.757738  681759 retry.go:31] will retry after 414.437055ms: waiting for machine to come up
	I0130 22:13:31.174434  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.174883  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.174936  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.174831  681759 retry.go:31] will retry after 555.308421ms: waiting for machine to come up
	I0130 22:13:31.731536  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.732150  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.732188  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.732111  681759 retry.go:31] will retry after 484.945442ms: waiting for machine to come up
	I0130 22:13:32.218554  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:32.218989  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:32.219024  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:32.218934  681759 retry.go:31] will retry after 802.660361ms: waiting for machine to come up
	I0130 22:13:33.022920  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:33.023362  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:33.023397  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:33.023298  681759 retry.go:31] will retry after 990.694559ms: waiting for machine to come up
	I0130 22:13:34.015896  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:34.016379  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:34.016407  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:34.016345  681759 retry.go:31] will retry after 1.382435075s: waiting for machine to come up
	I0130 22:13:35.400870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:35.401294  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:35.401327  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:35.401233  681759 retry.go:31] will retry after 1.53975085s: waiting for machine to come up
	I0130 22:13:37.909186  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075686172s)
	I0130 22:13:37.909214  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 22:13:37.909257  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:37.909303  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:39.052225  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.142886078s)
	I0130 22:13:39.052285  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 22:13:39.052326  680786 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:39.052412  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:36.942944  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:36.943539  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:36.943580  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:36.943478  681759 retry.go:31] will retry after 1.888978312s: waiting for machine to come up
	I0130 22:13:38.834886  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:38.835467  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:38.835508  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:38.835393  681759 retry.go:31] will retry after 1.774102713s: waiting for machine to come up
	I0130 22:13:41.133330  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080888409s)
	I0130 22:13:41.133358  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 22:13:41.133383  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:41.133432  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:43.814683  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.681223745s)
	I0130 22:13:43.814716  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 22:13:43.814742  680786 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:43.814779  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:40.611628  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:40.612048  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:40.612083  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:40.611995  681759 retry.go:31] will retry after 2.428322726s: waiting for machine to come up
	I0130 22:13:43.041506  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:43.041916  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:43.041950  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:43.041859  681759 retry.go:31] will retry after 4.531865882s: waiting for machine to come up
	I0130 22:13:48.690103  681007 start.go:369] acquired machines lock for "default-k8s-diff-port-850803" in 4m39.687788229s
	I0130 22:13:48.690177  681007 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:48.690188  681007 fix.go:54] fixHost starting: 
	I0130 22:13:48.690569  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:48.690606  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:48.709730  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0130 22:13:48.710142  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:48.710684  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:13:48.710714  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:48.711070  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:48.711280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:13:48.711446  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:13:48.712865  681007 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850803: state=Stopped err=<nil>
	I0130 22:13:48.712909  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	W0130 22:13:48.713065  681007 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:48.716450  681007 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850803" ...
	I0130 22:13:48.717867  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Start
	I0130 22:13:48.718031  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring networks are active...
	I0130 22:13:48.718700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network default is active
	I0130 22:13:48.719030  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network mk-default-k8s-diff-port-850803 is active
	I0130 22:13:48.719391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Getting domain xml...
	I0130 22:13:48.720046  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Creating domain...
	I0130 22:13:44.761511  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 22:13:44.761571  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:44.761627  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:46.718526  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.956864919s)
	I0130 22:13:46.718569  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 22:13:46.718605  680786 cache_images.go:123] Successfully loaded all cached images
	I0130 22:13:46.718612  680786 cache_images.go:92] LoadImages completed in 16.376507144s
	I0130 22:13:46.718742  680786 ssh_runner.go:195] Run: crio config
	I0130 22:13:46.782286  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:13:46.782311  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:46.782332  680786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:46.782372  680786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-023824 NodeName:no-preload-023824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:46.782544  680786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-023824"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:46.782617  680786 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-023824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
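The kubelet drop-in above relies on the standard systemd override pattern: the first, empty ExecStart= line clears the command inherited from the base kubelet.service before the replacement ExecStart is added. A minimal sketch of how the merged unit could be confirmed on the node (commands assumed for illustration, not taken from this run):

    sudo systemctl cat kubelet            # shows kubelet.service plus the 10-kubeadm.conf drop-in merged in order
    sudo systemctl daemon-reload          # pick up the new drop-in
    systemctl show kubelet -p ExecStart   # verify only the overriding command line remains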
	I0130 22:13:46.782674  680786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 22:13:46.792236  680786 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:46.792309  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:46.800361  680786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 22:13:46.816070  680786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 22:13:46.830820  680786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
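The kubeadm config rendered earlier is what gets copied to /var/tmp/minikube/kubeadm.yaml.new here. As a sketch (assuming a kubeadm release recent enough to ship the "config validate" subcommand), the file could be checked on the node and compared with the live copy, which is what restartCluster does with diff further down:

    # Sketch only: validate the rendered config against the kubeadm API types.
    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
    # Compare with the copy already on disk, as the restart path does below:
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new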
	I0130 22:13:46.846493  680786 ssh_runner.go:195] Run: grep 192.168.61.232	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:46.849883  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:46.861414  680786 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824 for IP: 192.168.61.232
	I0130 22:13:46.861442  680786 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:46.861617  680786 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:46.861664  680786 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:46.861767  680786 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.key
	I0130 22:13:46.861831  680786 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key.e2a9f73e
	I0130 22:13:46.861872  680786 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key
	I0130 22:13:46.862006  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:46.862040  680786 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:46.862051  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:46.862074  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:46.862095  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:46.862118  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:46.862163  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:46.863014  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:46.887626  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:13:46.910152  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:46.931711  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:46.953156  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:46.974390  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:46.996094  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:47.017226  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:47.038317  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:47.059119  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:47.080077  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:47.101123  680786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:47.116152  680786 ssh_runner.go:195] Run: openssl version
	I0130 22:13:47.121529  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:47.130166  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134329  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134391  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.139537  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:47.148157  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:47.156558  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160623  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160682  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.165652  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:47.174350  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:47.183169  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187220  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187245  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.192369  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
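Each ln -fs above names the symlink after the certificate's subject hash plus a ".0" suffix, which is how OpenSSL looks up CAs in a hashed directory; the b5213941, 51391683 and 3ec20f2e values come straight from the preceding openssl x509 -hash runs. A minimal sketch of the same check done by hand:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    echo "$HASH"                                            # b5213941 for the minikube CA in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem   # should report OK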
	I0130 22:13:47.201432  680786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:47.205518  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:47.210821  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:47.216074  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:47.221255  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:47.226609  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:47.231891  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
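The -checkend 86400 runs above ask whether each certificate will still be valid 24 hours from now; openssl exits 0 if so and 1 if the certificate would have expired by then, which is presumably the signal the restart path uses to decide whether any cert needs regenerating. A standalone sketch:

    # Exit status 0: still valid for at least the next 86400 seconds (24h).
    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "certificate valid for at least another 24h"
    else
        echo "certificate expires within 24h"
    fi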
	I0130 22:13:47.237220  680786 kubeadm.go:404] StartCluster: {Name:no-preload-023824 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:47.237355  680786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:47.237395  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:47.277488  680786 cri.go:89] found id: ""
	I0130 22:13:47.277561  680786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:47.286193  680786 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:47.286220  680786 kubeadm.go:636] restartCluster start
	I0130 22:13:47.286276  680786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:47.294206  680786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.295888  680786 kubeconfig.go:92] found "no-preload-023824" server: "https://192.168.61.232:8443"
	I0130 22:13:47.299852  680786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:47.307350  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.307401  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.317985  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.808078  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.808141  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.819689  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.308177  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.308241  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.319138  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.808388  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.808448  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.819501  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:49.308165  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.308254  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.319364  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.577701  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578126  680821 main.go:141] libmachine: (embed-certs-713938) Found IP for machine: 192.168.72.213
	I0130 22:13:47.578150  680821 main.go:141] libmachine: (embed-certs-713938) Reserving static IP address...
	I0130 22:13:47.578166  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has current primary IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578564  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.578605  680821 main.go:141] libmachine: (embed-certs-713938) DBG | skip adding static IP to network mk-embed-certs-713938 - found existing host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"}
	I0130 22:13:47.578616  680821 main.go:141] libmachine: (embed-certs-713938) Reserved static IP address: 192.168.72.213
	I0130 22:13:47.578630  680821 main.go:141] libmachine: (embed-certs-713938) Waiting for SSH to be available...
	I0130 22:13:47.578646  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Getting to WaitForSSH function...
	I0130 22:13:47.580757  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581084  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.581120  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581221  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH client type: external
	I0130 22:13:47.581282  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa (-rw-------)
	I0130 22:13:47.581324  680821 main.go:141] libmachine: (embed-certs-713938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:47.581344  680821 main.go:141] libmachine: (embed-certs-713938) DBG | About to run SSH command:
	I0130 22:13:47.581357  680821 main.go:141] libmachine: (embed-certs-713938) DBG | exit 0
	I0130 22:13:47.669006  680821 main.go:141] libmachine: (embed-certs-713938) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:47.669397  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetConfigRaw
	I0130 22:13:47.670084  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.672437  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.672782  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.672806  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.673048  680821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/config.json ...
	I0130 22:13:47.673225  680821 machine.go:88] provisioning docker machine ...
	I0130 22:13:47.673243  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:47.673432  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673608  680821 buildroot.go:166] provisioning hostname "embed-certs-713938"
	I0130 22:13:47.673628  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673766  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.675747  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676016  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.676043  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676178  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.676351  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676484  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676618  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.676743  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.677070  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.677083  680821 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-713938 && echo "embed-certs-713938" | sudo tee /etc/hostname
	I0130 22:13:47.800976  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-713938
	
	I0130 22:13:47.801011  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.803566  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.803876  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.803901  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.804047  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.804235  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804417  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.804699  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.805016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.805033  680821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-713938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-713938/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-713938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:47.928846  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:47.928882  680821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:47.928908  680821 buildroot.go:174] setting up certificates
	I0130 22:13:47.928956  680821 provision.go:83] configureAuth start
	I0130 22:13:47.928976  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.929283  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.931756  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932014  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.932045  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932206  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.934351  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934647  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.934670  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934814  680821 provision.go:138] copyHostCerts
	I0130 22:13:47.934875  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:47.934889  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:47.934963  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:47.935072  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:47.935087  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:47.935120  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:47.935196  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:47.935206  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:47.935234  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:47.935349  680821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.embed-certs-713938 san=[192.168.72.213 192.168.72.213 localhost 127.0.0.1 minikube embed-certs-713938]
	I0130 22:13:47.995543  680821 provision.go:172] copyRemoteCerts
	I0130 22:13:47.995624  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:47.995659  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.998113  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998409  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.998436  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998636  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.998822  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.999004  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.999123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.086454  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:48.108713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:48.131124  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:48.153234  680821 provision.go:86] duration metric: configureAuth took 224.258095ms
	I0130 22:13:48.153269  680821 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:48.153447  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:13:48.153554  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.156268  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156673  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.156705  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156847  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.157070  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157294  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157481  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.157649  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.158119  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.158143  680821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:48.449095  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:48.449131  680821 machine.go:91] provisioned docker machine in 775.890813ms
	I0130 22:13:48.449146  680821 start.go:300] post-start starting for "embed-certs-713938" (driver="kvm2")
	I0130 22:13:48.449161  680821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:48.449185  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.449573  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:48.449605  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.452408  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.452831  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.452866  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.453009  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.453240  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.453416  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.453566  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.539764  680821 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:48.543876  680821 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:48.543905  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:48.543969  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:48.544045  680821 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:48.544163  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:48.552947  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:48.573560  680821 start.go:303] post-start completed in 124.400867ms
	I0130 22:13:48.573588  680821 fix.go:56] fixHost completed within 19.671118722s
	I0130 22:13:48.573615  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.576352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576755  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.576777  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576965  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.577170  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577337  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.577708  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.578016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.578029  680821 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:48.689910  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652828.640343702
	
	I0130 22:13:48.689937  680821 fix.go:206] guest clock: 1706652828.640343702
	I0130 22:13:48.689948  680821 fix.go:219] Guest: 2024-01-30 22:13:48.640343702 +0000 UTC Remote: 2024-01-30 22:13:48.573593176 +0000 UTC m=+303.018932163 (delta=66.750526ms)
	I0130 22:13:48.690012  680821 fix.go:190] guest clock delta is within tolerance: 66.750526ms
	I0130 22:13:48.690023  680821 start.go:83] releasing machines lock for "embed-certs-713938", held for 19.787596053s
	I0130 22:13:48.690062  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.690367  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:48.692836  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693147  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.693180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693372  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.693895  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694095  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694178  680821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:48.694232  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.694331  680821 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:48.694354  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.696786  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697137  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697205  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697357  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697529  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.697648  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697675  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697706  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.697830  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697910  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.697985  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.698143  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.698307  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.807627  680821 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:48.813332  680821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:48.953919  680821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:48.960672  680821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:48.960744  680821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:48.977684  680821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:48.977702  680821 start.go:475] detecting cgroup driver to use...
	I0130 22:13:48.977766  680821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:48.989811  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:49.001223  680821 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:49.001281  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:49.012649  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:49.024426  680821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:49.130220  680821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:49.248922  680821 docker.go:233] disabling docker service ...
	I0130 22:13:49.248999  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:49.262066  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:49.272736  680821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:49.394001  680821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:49.514043  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:49.526282  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:49.545253  680821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:49.545303  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.554715  680821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:49.554775  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.564248  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.573151  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
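Taken together, the sed edits above pin the pause image and switch CRI-O to the cgroupfs cgroup manager with conmon placed in the pod cgroup. A quick way to confirm the resulting drop-in before the crio restart further down (command assumed for illustration; expected values are the ones set above):

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected after the edits:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"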
	I0130 22:13:49.582148  680821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:49.591604  680821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:49.599683  680821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:49.599722  680821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:49.611807  680821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:49.622179  680821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:49.745824  680821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:49.924707  680821 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:49.924788  680821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:49.930158  680821 start.go:543] Will wait 60s for crictl version
	I0130 22:13:49.930234  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:13:49.933971  680821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:49.973662  680821 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:49.973736  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.018705  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.070907  680821 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:13:50.072352  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:50.075100  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075487  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:50.075519  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075750  680821 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:50.079538  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:50.093965  680821 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:13:50.094028  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:50.133425  680821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:13:50.133506  680821 ssh_runner.go:195] Run: which lz4
	I0130 22:13:50.137267  680821 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:13:50.141273  680821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:13:50.141299  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
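The preload decision above comes down to listing the CRI-O image store and looking for the kube-apiserver image of the target Kubernetes version; since crictl found nothing, the roughly 458 MB preloaded tarball is copied over instead. A minimal sketch of that check (the grep pattern is illustrative, not minikube's exact code):

    if ! sudo crictl images --output json | grep -q 'registry.k8s.io/kube-apiserver:v1.28.4'; then
        echo "preloaded images missing - transferring preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
    fi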
	I0130 22:13:49.938197  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting to get IP...
	I0130 22:13:49.939301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939717  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939806  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:49.939711  681876 retry.go:31] will retry after 300.092754ms: waiting for machine to come up
	I0130 22:13:50.241301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241860  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241890  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.241804  681876 retry.go:31] will retry after 313.990905ms: waiting for machine to come up
	I0130 22:13:50.557661  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558161  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.558077  681876 retry.go:31] will retry after 484.197655ms: waiting for machine to come up
	I0130 22:13:51.043815  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044313  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044345  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.044255  681876 retry.go:31] will retry after 595.208415ms: waiting for machine to come up
	I0130 22:13:51.640765  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641244  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641281  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.641207  681876 retry.go:31] will retry after 646.272845ms: waiting for machine to come up
	I0130 22:13:52.288980  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289729  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:52.289599  681876 retry.go:31] will retry after 864.623353ms: waiting for machine to come up
	I0130 22:13:53.155328  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155826  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:53.155750  681876 retry.go:31] will retry after 943.126628ms: waiting for machine to come up
	I0130 22:13:49.807842  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.807941  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.826075  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.308394  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.308476  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.323858  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.807449  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.807538  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.823237  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.307590  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.307684  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.322999  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.807466  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.807551  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.822502  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.308300  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.308431  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.329435  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.808248  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.808379  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.823821  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.308375  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.308462  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.321178  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.807637  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.807748  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.823761  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:54.308223  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.308300  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.320791  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.023827  680821 crio.go:444] Took 1.886590 seconds to copy over tarball
	I0130 22:13:52.023892  680821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:13:55.116587  680821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.092664003s)
	I0130 22:13:55.116614  680821 crio.go:451] Took 3.092762 seconds to extract the tarball
	I0130 22:13:55.116644  680821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:13:55.159215  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:55.210233  680821 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:13:55.210263  680821 cache_images.go:84] Images are preloaded, skipping loading
	I0130 22:13:55.210344  680821 ssh_runner.go:195] Run: crio config
	I0130 22:13:55.268468  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:13:55.268496  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:55.268519  680821 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:55.268545  680821 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-713938 NodeName:embed-certs-713938 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:55.268710  680821 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-713938"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:55.268801  680821 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-713938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:13:55.268880  680821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:13:55.278244  680821 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:55.278321  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:55.287034  680821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0130 22:13:55.302012  680821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:13:55.318716  680821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0130 22:13:55.335364  680821 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:55.338950  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:55.349780  680821 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938 for IP: 192.168.72.213
	I0130 22:13:55.349814  680821 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:55.350000  680821 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:55.350058  680821 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:55.350157  680821 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/client.key
	I0130 22:13:55.350242  680821 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key.0982f839
	I0130 22:13:55.350299  680821 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key
	I0130 22:13:55.350469  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:55.350520  680821 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:55.350539  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:55.350577  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:55.350612  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:55.350648  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:55.350707  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:55.351807  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:55.373160  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 22:13:55.394634  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:55.416281  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:55.438713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:55.460324  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:55.481480  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:55.502869  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:55.524520  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:55.547601  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:55.569483  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:55.590741  680821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:54.100347  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:54.100763  681876 retry.go:31] will retry after 1.412406258s: waiting for machine to come up
	I0130 22:13:55.514929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515302  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515362  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:55.515267  681876 retry.go:31] will retry after 1.440442596s: waiting for machine to come up
	I0130 22:13:56.957895  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958367  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958390  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:56.958326  681876 retry.go:31] will retry after 1.996277334s: waiting for machine to come up
	I0130 22:13:54.807936  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.808021  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.824410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.307845  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.307937  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.320645  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.808272  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.808384  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.820051  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.307482  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.307567  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.319410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.808044  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.808167  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.820440  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.308301  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.308409  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.323612  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.323650  680786 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:13:57.323715  680786 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:13:57.323733  680786 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:13:57.323798  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:57.364379  680786 cri.go:89] found id: ""
	I0130 22:13:57.364467  680786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:13:57.380175  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:13:57.390701  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:13:57.390770  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400039  680786 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400071  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:57.546658  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.567155  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020447474s)
	I0130 22:13:58.567192  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.794332  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.875254  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.943890  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:13:58.944000  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:59.444721  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:55.608619  680821 ssh_runner.go:195] Run: openssl version
	I0130 22:13:55.880188  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:55.890762  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895346  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895423  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.900872  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:55.911050  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:55.921117  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925362  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925410  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.930499  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:55.940167  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:55.950284  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954643  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954688  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.959830  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:13:55.969573  680821 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:55.973654  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:55.980878  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:55.988262  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:55.995379  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:56.002387  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:56.007729  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 22:13:56.013164  680821 kubeadm.go:404] StartCluster: {Name:embed-certs-713938 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:56.013256  680821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:56.013290  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:56.054588  680821 cri.go:89] found id: ""
	I0130 22:13:56.054670  680821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:56.064691  680821 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:56.064720  680821 kubeadm.go:636] restartCluster start
	I0130 22:13:56.064781  680821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:56.074132  680821 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.075653  680821 kubeconfig.go:92] found "embed-certs-713938" server: "https://192.168.72.213:8443"
	I0130 22:13:56.078677  680821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:56.087919  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.087968  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.099213  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.588843  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.588940  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.601681  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.088185  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.088291  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.103229  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.588880  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.589012  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.604127  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.088751  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.088880  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.100833  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.588147  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.588264  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.604368  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.088571  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.088681  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.104028  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.588569  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.588684  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.602995  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.088596  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.088729  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.104195  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.588883  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.588987  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.605168  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.956101  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956568  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956598  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:58.956511  681876 retry.go:31] will retry after 2.859682959s: waiting for machine to come up
	I0130 22:14:01.819863  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820443  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820476  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:01.820388  681876 retry.go:31] will retry after 2.840054468s: waiting for machine to come up
	I0130 22:13:59.945172  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.444900  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.945042  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.444410  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.486688  680786 api_server.go:72] duration metric: took 2.54280014s to wait for apiserver process to appear ...
	I0130 22:14:01.486719  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:01.486775  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.487585  680786 api_server.go:269] stopped: https://192.168.61.232:8443/healthz: Get "https://192.168.61.232:8443/healthz": dial tcp 192.168.61.232:8443: connect: connection refused
	I0130 22:14:01.987279  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.088999  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.089091  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.104740  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:01.588046  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.588171  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.603186  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.088381  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.088495  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.104148  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.588728  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.588850  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.603782  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.088297  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.088396  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.101192  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.588856  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.588967  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.600516  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.088592  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.088688  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.101572  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.588042  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.588181  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.600890  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.088324  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.088437  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.103896  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.588678  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.588786  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.604329  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.974310  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:04.974343  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:04.974361  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.032790  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.032856  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.032882  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.052788  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.052811  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.487474  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.494053  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.494084  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:05.987783  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.994015  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.994049  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:06.487723  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:06.492959  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:14:06.500169  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:14:06.500208  680786 api_server.go:131] duration metric: took 5.013479999s to wait for apiserver health ...
	I0130 22:14:06.500221  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:14:06.500230  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:06.502253  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:04.661649  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.661976  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.662010  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:04.661932  681876 retry.go:31] will retry after 4.414855002s: waiting for machine to come up
	I0130 22:14:06.503764  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:06.514909  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:06.534344  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:06.546282  680786 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:06.546323  680786 system_pods.go:61] "coredns-76f75df574-cvjdk" [3f6526d5-7bf6-4d51-96bc-9dc6f70ead98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:06.546333  680786 system_pods.go:61] "etcd-no-preload-023824" [89ebff7a-3ac5-4aa7-aab7-9c61e59027a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:06.546352  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [bea4217d-ad4c-4945-ac59-1589976698e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:06.546369  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [4a1866ae-14ce-4132-bc99-225c518ab4bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:06.546394  680786 system_pods.go:61] "kube-proxy-phh5j" [3e662e91-7886-44e7-87a0-4a727011062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:06.546407  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [ad7a7f1c-6aa6-4e16-94d5-e5db7d3e39f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:06.546422  680786 system_pods.go:61] "metrics-server-57f55c9bc5-qfj5x" [13ae9773-8607-43ae-a122-4f97b367a954] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:06.546433  680786 system_pods.go:61] "storage-provisioner" [50dd4d19-5e05-47b7-a11f-5975bc6ef0e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:06.546445  680786 system_pods.go:74] duration metric: took 12.076118ms to wait for pod list to return data ...
	I0130 22:14:06.546458  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:06.549604  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:06.549634  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:06.549645  680786 node_conditions.go:105] duration metric: took 3.179552ms to run NodePressure ...
	I0130 22:14:06.549662  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.858172  680786 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863712  680786 kubeadm.go:787] kubelet initialised
	I0130 22:14:06.863731  680786 kubeadm.go:788] duration metric: took 5.530573ms waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863738  680786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:06.869540  680786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:08.886275  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:10.543927  680506 start.go:369] acquired machines lock for "old-k8s-version-912992" in 58.237287777s
	I0130 22:14:10.543984  680506 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:14:10.543993  680506 fix.go:54] fixHost starting: 
	I0130 22:14:10.544466  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:14:10.544494  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:14:10.563544  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0130 22:14:10.564063  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:14:10.564683  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:14:10.564705  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:14:10.565128  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:14:10.565338  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:10.565526  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:14:10.567290  680506 fix.go:102] recreateIfNeeded on old-k8s-version-912992: state=Stopped err=<nil>
	I0130 22:14:10.567314  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	W0130 22:14:10.567565  680506 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:14:10.569441  680506 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-912992" ...
	I0130 22:14:06.089016  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:06.089138  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:06.101226  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:06.101265  680821 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:06.101276  680821 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:06.101292  680821 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:06.101373  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:06.145816  680821 cri.go:89] found id: ""
	I0130 22:14:06.145935  680821 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:06.162118  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:06.174308  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:06.174379  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186134  680821 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186164  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.312544  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.860323  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.068181  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.151741  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.236354  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:07.236461  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:07.737169  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.237398  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.737483  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.237152  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.736646  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.763936  680821 api_server.go:72] duration metric: took 2.527584407s to wait for apiserver process to appear ...
	I0130 22:14:09.763962  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:09.763991  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:09.078352  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078935  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Found IP for machine: 192.168.50.254
	I0130 22:14:09.078985  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has current primary IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078997  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserving static IP address...
	I0130 22:14:09.079366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.079391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | skip adding static IP to network mk-default-k8s-diff-port-850803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"}
	I0130 22:14:09.079411  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Getting to WaitForSSH function...
	I0130 22:14:09.079431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserved static IP address: 192.168.50.254
	I0130 22:14:09.079442  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for SSH to be available...
	I0130 22:14:09.082189  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082612  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.082638  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082892  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH client type: external
	I0130 22:14:09.082917  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa (-rw-------)
	I0130 22:14:09.082982  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:09.082996  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | About to run SSH command:
	I0130 22:14:09.083009  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | exit 0
	I0130 22:14:09.182746  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:09.183304  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetConfigRaw
	I0130 22:14:09.184088  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.187115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187576  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.187606  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187972  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:14:09.188234  681007 machine.go:88] provisioning docker machine ...
	I0130 22:14:09.188262  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:09.188470  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188648  681007 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850803"
	I0130 22:14:09.188670  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188822  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.191366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191769  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.191808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.192148  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192332  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192488  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.192732  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.193245  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.193273  681007 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850803 && echo "default-k8s-diff-port-850803" | sudo tee /etc/hostname
	I0130 22:14:09.344664  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850803
	
	I0130 22:14:09.344700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.348016  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348485  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.348516  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348685  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.348962  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.349505  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.349996  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.350025  681007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:09.490740  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:09.490778  681007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:09.490812  681007 buildroot.go:174] setting up certificates
	I0130 22:14:09.490825  681007 provision.go:83] configureAuth start
	I0130 22:14:09.490844  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.491225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.494577  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495040  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.495085  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495194  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.497931  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498407  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.498433  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498638  681007 provision.go:138] copyHostCerts
	I0130 22:14:09.498702  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:09.498717  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:09.498778  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:09.498898  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:09.498912  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:09.498955  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:09.499039  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:09.499052  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:09.499080  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:09.499147  681007 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850803 san=[192.168.50.254 192.168.50.254 localhost 127.0.0.1 minikube default-k8s-diff-port-850803]
	I0130 22:14:09.749739  681007 provision.go:172] copyRemoteCerts
	I0130 22:14:09.749810  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:09.749848  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.753032  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753498  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.753533  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753727  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.753945  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.754170  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.754364  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:09.851640  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:09.879906  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 22:14:09.907030  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:09.934916  681007 provision.go:86] duration metric: configureAuth took 444.054165ms
	I0130 22:14:09.934954  681007 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:09.935190  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:14:09.935324  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.938507  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.938854  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.938894  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.939068  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.939312  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939517  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.939899  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.940390  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.940421  681007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:10.275894  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:10.275935  681007 machine.go:91] provisioned docker machine in 1.087679661s
	I0130 22:14:10.275950  681007 start.go:300] post-start starting for "default-k8s-diff-port-850803" (driver="kvm2")
	I0130 22:14:10.275965  681007 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:10.275989  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.276387  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:10.276445  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.279676  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280069  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.280115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280364  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.280584  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.280766  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.280923  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.373204  681007 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:10.377609  681007 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:10.377637  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:10.377705  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:10.377773  681007 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:10.377857  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:10.388096  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:10.414529  681007 start.go:303] post-start completed in 138.561717ms
	I0130 22:14:10.414557  681007 fix.go:56] fixHost completed within 21.7243684s
	I0130 22:14:10.414586  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.417282  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417709  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.417741  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417872  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.418063  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418233  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418356  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.418555  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:10.419070  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:10.419086  681007 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:14:10.543719  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652850.477584158
	
	I0130 22:14:10.543751  681007 fix.go:206] guest clock: 1706652850.477584158
	I0130 22:14:10.543762  681007 fix.go:219] Guest: 2024-01-30 22:14:10.477584158 +0000 UTC Remote: 2024-01-30 22:14:10.414562089 +0000 UTC m=+301.564256760 (delta=63.022069ms)
	I0130 22:14:10.543828  681007 fix.go:190] guest clock delta is within tolerance: 63.022069ms
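(Editor's note: purely illustrative, not minikube's code. The fix.go lines above record the guest/host clock-skew check: read the guest's `date +%s.%N`, compare it with the host wall clock, and only treat the machine as needing a resync if the delta exceeds a tolerance. A minimal Go sketch of that pattern follows; the one-second tolerance and the parsing shortcut are assumptions for the example.)

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Value of `date +%s.%N` as seen in the log above (fractional Unix seconds).
	guestOut := "1706652850.477584158"
	sec, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second))) // approximate; fine for a skew check
	host := time.Now()

	delta := host.Sub(guest)
	// Assumed tolerance of one second; the real tolerance is whatever fix.go uses.
	if math.Abs(delta.Seconds()) > 1.0 {
		fmt.Printf("guest clock delta %v exceeds tolerance, clock would be resynced\n", delta)
	} else {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	}
}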
	I0130 22:14:10.543837  681007 start.go:83] releasing machines lock for "default-k8s-diff-port-850803", held for 21.853682485s
	I0130 22:14:10.543884  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.544172  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:10.547453  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.547833  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.547907  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.548185  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554556  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554902  681007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:10.554975  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.555050  681007 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:10.555093  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.558413  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559108  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559387  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559438  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559764  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.559857  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.560050  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560137  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.560224  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560350  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560579  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560578  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.560760  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.681106  681007 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:10.688790  681007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:10.845108  681007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:10.853366  681007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:10.853540  681007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:10.873299  681007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:10.873326  681007 start.go:475] detecting cgroup driver to use...
	I0130 22:14:10.873426  681007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:10.891563  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:10.908180  681007 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:10.908258  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:10.921344  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:10.935068  681007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:11.036505  681007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:11.151640  681007 docker.go:233] disabling docker service ...
	I0130 22:14:11.151718  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:11.167082  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:11.178680  681007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:11.303325  681007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:11.410097  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:11.426297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:11.452546  681007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:14:11.452634  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.463081  681007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:11.463156  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.472742  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.482828  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.494761  681007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:11.507028  681007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:11.517686  681007 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:11.517742  681007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:11.530301  681007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:11.541975  681007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:11.696623  681007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:14:11.913271  681007 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:11.913391  681007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:11.919870  681007 start.go:543] Will wait 60s for crictl version
	I0130 22:14:11.919944  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:14:11.926064  681007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:11.975070  681007 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:11.975177  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.033039  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.081059  681007 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:14:10.570784  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Start
	I0130 22:14:10.571067  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring networks are active...
	I0130 22:14:10.571790  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network default is active
	I0130 22:14:10.572160  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network mk-old-k8s-version-912992 is active
	I0130 22:14:10.572697  680506 main.go:141] libmachine: (old-k8s-version-912992) Getting domain xml...
	I0130 22:14:10.573411  680506 main.go:141] libmachine: (old-k8s-version-912992) Creating domain...
	I0130 22:14:11.948333  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting to get IP...
	I0130 22:14:11.949455  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:11.950018  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:11.950060  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:11.949981  682021 retry.go:31] will retry after 276.511731ms: waiting for machine to come up
	I0130 22:14:12.228702  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.229508  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.229544  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.229445  682021 retry.go:31] will retry after 291.918453ms: waiting for machine to come up
	I0130 22:14:12.522882  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.523484  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.523520  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.523451  682021 retry.go:31] will retry after 411.891157ms: waiting for machine to come up
	I0130 22:14:12.082431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:12.085750  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086144  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:12.086175  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086400  681007 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:12.091494  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:12.104832  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:14:12.104904  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:12.160529  681007 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:14:12.160610  681007 ssh_runner.go:195] Run: which lz4
	I0130 22:14:12.165037  681007 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:14:12.169743  681007 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:12.169772  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:14:11.379194  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.394473  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.254742  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.254788  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.254809  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.438140  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.438192  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.438210  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.470956  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.470985  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.764535  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.773346  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:13.773385  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.264393  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.277818  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:14.277878  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.764145  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.769720  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:14:14.778872  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:14.778910  680821 api_server.go:131] duration metric: took 5.01493889s to wait for apiserver health ...
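(Editor's note: purely illustrative, not minikube's implementation. The api_server.go lines above record the apiserver readiness wait: GET /healthz over HTTPS is retried while it returns 403 (RBAC not yet bootstrapped) or 500 (poststarthooks still pending), and only a 200 body of "ok" counts as healthy. A minimal Go sketch of that polling loop follows; the endpoint, interval, and overall timeout are assumptions for the example, and certificate verification is skipped because this client does not trust the apiserver's serving cert.)

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.213:8443/healthz" // endpoint seen in the log above
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // illustrative only
		},
	}

	deadline := time.Now().Add(2 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			// 403 and 500 are expected transient states during startup;
			// only 200 with body "ok" is treated as healthy.
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // assumed retry interval
	}
	fmt.Println("timed out waiting for apiserver health")
}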
	I0130 22:14:14.778923  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:14:14.778931  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:14.780880  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:14.782682  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:14.798955  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:14.824975  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:14.841121  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:14.841166  680821 system_pods.go:61] "coredns-5dd5756b68-wcncl" [43c0f4bc-1d47-4337-a179-bb27a4164ca5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:14.841177  680821 system_pods.go:61] "etcd-embed-certs-713938" [f8c3bfda-0fca-429b-a0a2-b4fc1d496085] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:14.841196  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [7536531d-a1bd-451b-8530-143f9a41b85c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:14.841209  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [76c2d0eb-823a-41df-91dc-584acb56f81e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:14.841222  680821 system_pods.go:61] "kube-proxy-4c6nn" [253bee90-32a4-4dc0-9db7-bdfa663bcc96] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:14.841233  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [3b4e8324-e074-45ab-b24c-df1bd226e12e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:14.841247  680821 system_pods.go:61] "metrics-server-57f55c9bc5-hcg7l" [25906794-7927-48cf-8f80-52f8a2a68d99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:14.841265  680821 system_pods.go:61] "storage-provisioner" [5820d2a9-be84-42e8-ac25-d4ac1cf22d90] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:14.841275  680821 system_pods.go:74] duration metric: took 16.275602ms to wait for pod list to return data ...
	I0130 22:14:14.841289  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:14.848145  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:14.848183  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:14.848198  680821 node_conditions.go:105] duration metric: took 6.903129ms to run NodePressure ...
	I0130 22:14:14.848221  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:15.186295  680821 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191845  680821 kubeadm.go:787] kubelet initialised
	I0130 22:14:15.191872  680821 kubeadm.go:788] duration metric: took 5.54389ms waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191883  680821 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:15.202037  680821 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:12.937414  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.938094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.938126  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.937994  682021 retry.go:31] will retry after 576.497569ms: waiting for machine to come up
	I0130 22:14:13.515903  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:13.516521  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:13.516547  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:13.516421  682021 retry.go:31] will retry after 519.706227ms: waiting for machine to come up
	I0130 22:14:14.037307  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.037937  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.037967  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.037845  682021 retry.go:31] will retry after 797.706186ms: waiting for machine to come up
	I0130 22:14:14.836997  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.837662  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.837686  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.837561  682021 retry.go:31] will retry after 782.265584ms: waiting for machine to come up
	I0130 22:14:15.621147  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:15.621747  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:15.621779  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:15.621706  682021 retry.go:31] will retry after 1.00093966s: waiting for machine to come up
	I0130 22:14:16.624002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:16.624474  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:16.624506  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:16.624365  682021 retry.go:31] will retry after 1.760162378s: waiting for machine to come up
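The "will retry after ..." lines above come from libmachine polling the libvirt DHCP leases for the VM's address with a growing, jittered delay. As a rough standalone illustration only (this is not minikube's actual retry.go/libmachine code; lookupIP is a hypothetical placeholder for the real lease query), the pattern looks roughly like this in Go:

// Sketch of a retry-with-growing-backoff loop, assuming lookupIP is some
// external check that keeps failing until the machine has an address.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real DHCP-lease lookup; here it always fails.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 500 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the delay and let it grow, mirroring the 0.5s..4s waits in the log.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(5 * time.Second); err != nil {
		fmt.Println("error:", err)
	} else {
		fmt.Println("found IP:", ip)
	}
}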
	I0130 22:14:14.166451  681007 crio.go:444] Took 2.001438 seconds to copy over tarball
	I0130 22:14:14.166549  681007 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:17.707309  681007 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.540722039s)
	I0130 22:14:17.707346  681007 crio.go:451] Took 3.540858 seconds to extract the tarball
	I0130 22:14:17.707367  681007 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:14:17.751814  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:17.817529  681007 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:14:17.817564  681007 cache_images.go:84] Images are preloaded, skipping loading
	I0130 22:14:17.817650  681007 ssh_runner.go:195] Run: crio config
	I0130 22:14:17.882693  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:17.882719  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:17.882745  681007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:17.882777  681007 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850803 NodeName:default-k8s-diff-port-850803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:14:17.882963  681007 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850803"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:17.883060  681007 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 22:14:17.883125  681007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:14:17.895645  681007 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:17.895725  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:17.906009  681007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0130 22:14:17.923445  681007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:17.941439  681007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0130 22:14:17.958729  681007 ssh_runner.go:195] Run: grep 192.168.50.254	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:17.962941  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:17.975030  681007 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803 for IP: 192.168.50.254
	I0130 22:14:17.975065  681007 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:17.975251  681007 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:17.975300  681007 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:17.975377  681007 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.key
	I0130 22:14:17.975436  681007 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key.c40bdd21
	I0130 22:14:17.975471  681007 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key
	I0130 22:14:17.975603  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:17.975634  681007 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:17.975642  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:17.975665  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:17.975689  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:17.975714  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:17.975751  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:17.976423  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:18.003363  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:18.029597  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:18.053558  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:14:18.077340  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:18.100959  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:18.124756  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:18.148266  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:18.171688  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:18.195020  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:18.221728  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:18.245353  681007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:18.262630  681007 ssh_runner.go:195] Run: openssl version
	I0130 22:14:18.268255  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:18.279361  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284264  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284318  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.290374  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:18.301414  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:18.312992  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317776  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317826  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.323596  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:18.334360  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:18.346052  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350871  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350917  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.358340  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:18.371640  681007 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:18.376906  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:18.383780  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:18.390468  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:18.396506  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:18.402525  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:18.407949  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
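The six openssl invocations above each run "x509 -noout -checkend 86400", i.e. they ask whether a control-plane certificate is going to expire within the next 24 hours before it is reused. A rough Go equivalent, for illustration only (this is not minikube's implementation; the certificate path is taken from the log purely as an example):

// Minimal sketch: load a PEM certificate and report whether it expires
// within the given window, which is what "-checkend 86400" checks.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True if the certificate's NotAfter falls inside the window (or has passed).
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Path is illustrative; it matches one of the files checked in the log above.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, "error:", err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}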
	I0130 22:14:18.413375  681007 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:18.413454  681007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:18.413546  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:18.460309  681007 cri.go:89] found id: ""
	I0130 22:14:18.460393  681007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:18.474036  681007 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:18.474062  681007 kubeadm.go:636] restartCluster start
	I0130 22:14:18.474153  681007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:18.484682  681007 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:18.486004  681007 kubeconfig.go:92] found "default-k8s-diff-port-850803" server: "https://192.168.50.254:8444"
	I0130 22:14:18.488661  681007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:18.499334  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:18.499389  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:18.512812  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:15.878232  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.047391  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:17.215329  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.367292  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:18.386828  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:18.387291  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:18.387324  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:18.387230  682021 retry.go:31] will retry after 1.961289931s: waiting for machine to come up
	I0130 22:14:20.351407  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:20.351939  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:20.351975  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:20.351883  682021 retry.go:31] will retry after 2.41188295s: waiting for machine to come up
	I0130 22:14:18.999791  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.011386  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.025823  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.499386  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.499505  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.513098  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.000365  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.000469  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.017498  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.500160  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.500286  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.517695  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.000275  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.000409  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.017613  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.499881  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.499974  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.516790  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.000448  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.000562  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.014377  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.499900  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.500014  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.513212  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.999725  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.999875  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.013983  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:23.499549  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.499654  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.515308  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
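The repeated "Checking apiserver status ..." entries show the restart path polling roughly every 500ms for a kube-apiserver process and giving up once a context deadline passes, which is what later produces the "needs reconfigure: apiserver error: context deadline exceeded" decision in this log. A minimal standalone sketch of that kind of polling loop, under the assumption that pgrep is available on the guest (this is not minikube's actual api_server.go code):

// Poll for a kube-apiserver process until it appears or the deadline expires.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func apiserverRunning() bool {
	// pgrep exits non-zero when nothing matches, which Run() reports as an error.
	return exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func waitForAPIServer(timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if apiserverRunning() {
			return nil
		}
		select {
		case <-ctx.Done():
			// ctx.Err() is context.DeadlineExceeded here, i.e. "context deadline exceeded".
			return fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	if err := waitForAPIServer(10 * time.Second); err != nil {
		fmt.Println("needs reconfigure:", err)
		return
	}
	fmt.Println("apiserver is up")
}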
	I0130 22:14:19.554357  680786 pod_ready.go:92] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.685256  680786 pod_ready.go:81] duration metric: took 12.815676408s waiting for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.685298  680786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705805  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.705843  680786 pod_ready.go:81] duration metric: took 20.535204ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705859  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716827  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.716859  680786 pod_ready.go:81] duration metric: took 10.990465ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716873  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224601  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.224631  680786 pod_ready.go:81] duration metric: took 507.749018ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224648  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231481  680786 pod_ready.go:92] pod "kube-proxy-phh5j" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.231507  680786 pod_ready.go:81] duration metric: took 6.849925ms waiting for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231519  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237347  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.237372  680786 pod_ready.go:81] duration metric: took 5.84531ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237383  680786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.246204  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:24.248275  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:21.709185  680821 pod_ready.go:92] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:21.709226  680821 pod_ready.go:81] duration metric: took 6.507155774s waiting for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:21.709240  680821 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716371  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.716398  680821 pod_ready.go:81] duration metric: took 2.007151614s waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716407  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722781  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.722803  680821 pod_ready.go:81] duration metric: took 6.390258ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722814  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729034  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.729055  680821 pod_ready.go:81] duration metric: took 6.235103ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729063  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737325  680821 pod_ready.go:92] pod "kube-proxy-4c6nn" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.737348  680821 pod_ready.go:81] duration metric: took 8.279273ms waiting for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737361  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.742989  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.743013  680821 pod_ready.go:81] duration metric: took 5.643901ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.743024  680821 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.766642  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:22.767267  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:22.767359  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:22.767247  682021 retry.go:31] will retry after 2.473522194s: waiting for machine to come up
	I0130 22:14:25.242661  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:25.243221  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:25.243246  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:25.243168  682021 retry.go:31] will retry after 4.117858968s: waiting for machine to come up
	I0130 22:14:23.999813  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.999897  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.012879  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.499381  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.499457  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.513834  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.999458  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.999554  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.014779  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.499957  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.500093  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.513275  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.999800  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.999901  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.011952  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.499447  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.499530  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.511962  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.999473  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.999579  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.012316  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:27.499767  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:27.499862  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.511793  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.000036  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.000127  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.012698  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.499393  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.499495  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.511459  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.511494  681007 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:28.511507  681007 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:28.511522  681007 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:28.511593  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:28.550124  681007 cri.go:89] found id: ""
	I0130 22:14:28.550200  681007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:28.566091  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:28.575952  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:28.576019  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584539  681007 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584559  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:28.715666  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:26.744291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.744825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:25.752959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.250440  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:30.251820  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:29.365529  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366106  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has current primary IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366142  680506 main.go:141] libmachine: (old-k8s-version-912992) Found IP for machine: 192.168.39.84
	I0130 22:14:29.366157  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserving static IP address...
	I0130 22:14:29.366732  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.366763  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserved static IP address: 192.168.39.84
	I0130 22:14:29.366789  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | skip adding static IP to network mk-old-k8s-version-912992 - found existing host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"}
	I0130 22:14:29.366805  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting for SSH to be available...
	I0130 22:14:29.366820  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Getting to WaitForSSH function...
	I0130 22:14:29.369195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369625  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.369648  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369851  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH client type: external
	I0130 22:14:29.369899  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa (-rw-------)
	I0130 22:14:29.369956  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:29.369986  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | About to run SSH command:
	I0130 22:14:29.370002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | exit 0
	I0130 22:14:29.469381  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:29.469800  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetConfigRaw
	I0130 22:14:29.470597  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.473253  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.473721  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.473748  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.474114  680506 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/config.json ...
	I0130 22:14:29.474312  680506 machine.go:88] provisioning docker machine ...
	I0130 22:14:29.474333  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:29.474552  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474741  680506 buildroot.go:166] provisioning hostname "old-k8s-version-912992"
	I0130 22:14:29.474767  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474946  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.477297  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477636  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.477677  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477927  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.478188  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478383  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478541  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.478761  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.479265  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.479291  680506 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-912992 && echo "old-k8s-version-912992" | sudo tee /etc/hostname
	I0130 22:14:29.626924  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-912992
	
	I0130 22:14:29.626957  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.630607  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631062  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.631094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631278  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.631514  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631696  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631891  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.632111  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.632505  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.632524  680506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-912992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-912992/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-912992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:29.777390  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:29.777424  680506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:29.777450  680506 buildroot.go:174] setting up certificates
	I0130 22:14:29.777484  680506 provision.go:83] configureAuth start
	I0130 22:14:29.777504  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.777846  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.781195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781632  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.781682  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781860  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.784395  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784744  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.784776  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784895  680506 provision.go:138] copyHostCerts
	I0130 22:14:29.784960  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:29.784973  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:29.785039  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:29.785139  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:29.785148  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:29.785173  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:29.785231  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:29.785240  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:29.785263  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:29.785404  680506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-912992 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube old-k8s-version-912992]
	I0130 22:14:30.047520  680506 provision.go:172] copyRemoteCerts
	I0130 22:14:30.047582  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:30.047607  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.050409  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050757  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.050790  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050992  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.051204  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.051345  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.051517  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.143197  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:30.164424  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 22:14:30.185497  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:30.207694  680506 provision.go:86] duration metric: configureAuth took 430.192351ms
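
For context, the configureAuth step above generates a server certificate whose SAN list covers the VM IP, localhost, and the machine name. A minimal, hypothetical Go sketch of producing such a certificate with crypto/x509 follows; it is self-signed for brevity and is not minikube's provision code.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Hypothetical SANs mirroring the log: IPs plus DNS names.
        ips := []net.IP{net.ParseIP("192.168.39.84"), net.ParseIP("127.0.0.1")}
        dns := []string{"localhost", "minikube", "old-k8s-version-912992"}

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-912992"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  ips,
            DNSNames:     dns,
        }

        // Self-signed for brevity; the flow in the log signs with the shared CA key (ca-key.pem).
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
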
	I0130 22:14:30.207731  680506 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:30.207938  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:14:30.208031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.210616  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.210984  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.211029  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.211184  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.211404  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211560  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211689  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.211838  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.212146  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.212161  680506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:30.548338  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:30.548369  680506 machine.go:91] provisioned docker machine in 1.074040133s
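
Every "Run:" line in this log is a command executed over SSH against the guest VM. A minimal sketch of issuing one such remote command with golang.org/x/crypto/ssh follows; the address, user, and key path are taken from the log (key path shortened), and this is illustrative rather than minikube's sshutil implementation.

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Assumed values mirroring the log output above (key path shortened).
        keyPath := "/home/jenkins/.minikube/machines/old-k8s-version-912992/id_rsa"
        addr := "192.168.39.84:22"

        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()

        // Same shape of command as the CRIO_MINIKUBE_OPTIONS write above.
        out, err := sess.CombinedOutput(`sudo mkdir -p /etc/sysconfig && printf %s "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube`)
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }
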
	I0130 22:14:30.548384  680506 start.go:300] post-start starting for "old-k8s-version-912992" (driver="kvm2")
	I0130 22:14:30.548397  680506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:30.548418  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.548802  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:30.548859  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.552482  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.552909  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.552945  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.553163  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.553368  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.553563  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.553702  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.649611  680506 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:30.654369  680506 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:30.654398  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:30.654527  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:30.654606  680506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:30.654692  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:30.664288  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:30.687603  680506 start.go:303] post-start completed in 139.202965ms
	I0130 22:14:30.687635  680506 fix.go:56] fixHost completed within 20.143642101s
	I0130 22:14:30.687663  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.690292  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690742  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.690780  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690973  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.691179  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691381  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691544  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.691751  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.692061  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.692072  680506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0130 22:14:30.827201  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652870.759760061
	
	I0130 22:14:30.827227  680506 fix.go:206] guest clock: 1706652870.759760061
	I0130 22:14:30.827237  680506 fix.go:219] Guest: 2024-01-30 22:14:30.759760061 +0000 UTC Remote: 2024-01-30 22:14:30.687640253 +0000 UTC m=+368.205420110 (delta=72.119808ms)
	I0130 22:14:30.827264  680506 fix.go:190] guest clock delta is within tolerance: 72.119808ms
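
The guest-clock check parses the remote `date +%s.%N` output and subtracts the locally recorded timestamp. A small illustrative Go version of that arithmetic, using the values from the log and an assumed 1-second tolerance:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    func main() {
        // Output of `date +%s.%N` on the guest, taken from the log above.
        raw := "1706652870.759760061"

        parts := strings.SplitN(raw, ".", 2)
        sec, _ := strconv.ParseInt(parts[0], 10, 64)
        nsec, _ := strconv.ParseInt(parts[1], 10, 64)
        guest := time.Unix(sec, nsec)

        // The local timestamp recorded for the same moment in the log.
        local := time.Date(2024, 1, 30, 22, 14, 30, 687640253, time.UTC)

        delta := guest.Sub(local) // about 72ms for these values
        within := delta < time.Second && delta > -time.Second
        fmt.Printf("guest clock delta: %v (within assumed 1s tolerance: %v)\n", delta, within)
    }
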
	I0130 22:14:30.827276  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 20.283317012s
	I0130 22:14:30.827301  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.827604  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:30.830260  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830761  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.830797  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830974  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831570  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831747  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831856  680506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:30.831925  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.832004  680506 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:30.832031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.834970  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835316  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835340  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835377  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835539  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.835794  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835798  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.835816  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835964  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.836028  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836202  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.836228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.836375  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836573  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.931876  680506 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:30.959543  680506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:31.114259  680506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:31.122360  680506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:31.122498  680506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:31.142608  680506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:31.142637  680506 start.go:475] detecting cgroup driver to use...
	I0130 22:14:31.142709  680506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:31.159940  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:31.177310  680506 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:31.177394  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:31.197811  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:31.215942  680506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:31.341800  680506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:31.476217  680506 docker.go:233] disabling docker service ...
	I0130 22:14:31.476303  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:31.493525  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:31.505631  680506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:31.630766  680506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:31.744997  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:31.760432  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:31.778076  680506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 22:14:31.778156  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.788945  680506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:31.789063  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.799691  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.811057  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.822879  680506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:31.835071  680506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:31.844391  680506 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:31.844478  680506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:31.858948  680506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:31.868566  680506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:31.972874  680506 ssh_runner.go:195] Run: sudo systemctl restart crio
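
The two sed substitutions above pin the pause image and force the cgroupfs manager in /etc/crio/crio.conf.d/02-crio.conf before crio is restarted. A hedged Go equivalent of that rewrite (illustrative only, not minikube's crio.go):

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        const path = "/etc/crio/crio.conf.d/02-crio.conf"

        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }

        // Equivalent of the two sed substitutions in the log:
        // point pause_image at registry.k8s.io/pause:3.1 and force the cgroupfs manager.
        out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.1"`))
        out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))

        if err := os.WriteFile(path, out, 0o644); err != nil {
            panic(err)
        }
        fmt.Println("rewrote", path, "- crio must be restarted for this to take effect")
    }
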
	I0130 22:14:32.150449  680506 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:32.150536  680506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:32.155130  680506 start.go:543] Will wait 60s for crictl version
	I0130 22:14:32.155192  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:32.158927  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:32.199472  680506 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:32.199568  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.245662  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.308945  680506 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 22:14:32.310311  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:32.313118  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313548  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:32.313596  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313777  680506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:32.317774  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
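
The grep/echo pipeline above updates /etc/hosts idempotently: any existing host.minikube.internal line is dropped and a fresh mapping is appended. An equivalent, purely illustrative Go version:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const host = "host.minikube.internal"
        const entry = "192.168.39.1\t" + host

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }

        // Keep every line that does not already map the managed hostname, then re-add it.
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)

        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("updated /etc/hosts")
    }
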
	I0130 22:14:32.333291  680506 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 22:14:32.333356  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:32.389401  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:32.389494  680506 ssh_runner.go:195] Run: which lz4
	I0130 22:14:32.394618  680506 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 22:14:32.399870  680506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:32.399907  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 22:14:29.354779  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.576966  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.649608  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.729908  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:29.730008  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.230637  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.730130  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.231149  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.730722  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.230159  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.258815  681007 api_server.go:72] duration metric: took 2.528908545s to wait for apiserver process to appear ...
	I0130 22:14:32.258850  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:32.258872  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:31.245860  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:33.256817  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:32.753558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.761674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.208834  680506 crio.go:444] Took 1.814253 seconds to copy over tarball
	I0130 22:14:34.208929  680506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:37.177389  680506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.968423546s)
	I0130 22:14:37.177436  680506 crio.go:451] Took 2.968549 seconds to extract the tarball
	I0130 22:14:37.177450  680506 ssh_runner.go:146] rm: /preloaded.tar.lz4
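
The preload path copies a ~440 MB tarball to the guest and unpacks it with lz4, preserving security.capability xattrs. A minimal sketch of driving the same tar invocation from Go via os/exec (flags and paths mirror the log):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same flags as the log: preserve xattrs (security.capability) and
        // decompress with lz4 while extracting into /var.
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        out, err := cmd.CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
        }
        fmt.Println("preload extracted into /var")
    }
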
	I0130 22:14:37.233540  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:37.291641  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:37.291680  680506 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:14:37.291780  680506 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.291799  680506 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.291820  680506 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 22:14:37.291828  680506 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.291904  680506 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.291802  680506 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.292022  680506 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.291788  680506 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293663  680506 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.293740  680506 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293753  680506 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.293662  680506 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.293800  680506 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.293884  680506 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.492113  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.494903  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.495618  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 22:14:37.508190  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.512582  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.514112  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.259261  681007 api_server.go:269] stopped: https://192.168.50.254:8444/healthz: Get "https://192.168.50.254:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:37.259326  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:37.454899  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:37.454935  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:37.759230  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.420961  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.420997  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.421026  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.429934  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.429972  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.759948  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:35.746244  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.748221  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.252371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.752965  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:40.032924  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.032973  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.032996  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.076077  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.076109  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.259372  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.268746  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.268785  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.759307  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.764886  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:14:40.774834  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:40.774863  681007 api_server.go:131] duration metric: took 8.516004362s to wait for apiserver health ...
	I0130 22:14:40.774875  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:40.774883  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:40.776748  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
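
The healthz wait above treats 403 and 500 responses as "apiserver up but post-start hooks still running" and stops at the first 200/ok. A small hedged sketch of such a polling loop (endpoint taken from the log; skipping TLS verification here is only to keep the sketch self-contained):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint from the log; InsecureSkipVerify only keeps the sketch short.
        const url = "https://192.168.50.254:8444/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }

        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                // 403/500 mean the apiserver answers but is not yet fully ready.
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz:", string(body)) // "ok"
                    return
                }
                fmt.Println("not ready yet, status", resp.StatusCode)
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("gave up waiting for apiserver healthz")
    }
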
	I0130 22:14:37.573794  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.589122  680506 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 22:14:37.589177  680506 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.589222  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.653263  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.661867  680506 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 22:14:37.661918  680506 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.661974  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.681759  680506 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 22:14:37.681810  680506 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 22:14:37.681868  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811285  680506 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 22:14:37.811334  680506 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.811398  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811403  680506 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 22:14:37.811441  680506 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.811507  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811522  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.811592  680506 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 22:14:37.811646  680506 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.811684  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 22:14:37.811508  680506 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 22:14:37.811723  680506 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.811694  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811753  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811648  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.828948  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.887304  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 22:14:37.887396  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.924180  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.934685  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 22:14:37.934737  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.934948  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 22:14:37.951228  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 22:14:37.955310  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 22:14:37.988234  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 22:14:38.007649  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 22:14:38.007710  680506 cache_images.go:92] LoadImages completed in 716.017973ms
	W0130 22:14:38.007789  680506 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
	I0130 22:14:38.007920  680506 ssh_runner.go:195] Run: crio config
	I0130 22:14:38.081077  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:38.081112  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:38.081141  680506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:38.081175  680506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-912992 NodeName:old-k8s-version-912992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 22:14:38.082099  680506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-912992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-912992
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.84:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:38.082244  680506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-912992 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:14:38.082342  680506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 22:14:38.091606  680506 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:38.091676  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:38.100424  680506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 22:14:38.117658  680506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:38.134721  680506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 22:14:38.151680  680506 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:38.155416  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:38.169111  680506 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992 for IP: 192.168.39.84
	I0130 22:14:38.169145  680506 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:38.169305  680506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:38.169342  680506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:38.169412  680506 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.key
	I0130 22:14:38.169506  680506 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key.2e1821a6
	I0130 22:14:38.169547  680506 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key
	I0130 22:14:38.169654  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:38.169689  680506 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:38.169702  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:38.169726  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:38.169753  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:38.169776  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:38.169818  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:38.170542  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:38.195046  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:38.217051  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:38.240099  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 22:14:38.266523  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:38.289237  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:38.313011  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:38.336140  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:38.359683  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:38.382658  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:38.407558  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:38.435231  680506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:38.453753  680506 ssh_runner.go:195] Run: openssl version
	I0130 22:14:38.459339  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:38.469159  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474001  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474079  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.479508  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:38.489049  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:38.498644  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503289  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503340  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.508873  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:38.518533  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:38.527871  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532447  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532493  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.538832  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:38.549398  680506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:38.553860  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:38.559537  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:38.565050  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:38.570705  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:38.576386  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:38.581918  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
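The openssl runs above all use -checkend 86400, i.e. "fail if the certificate expires within the next 24 hours", which is how minikube decides whether the existing certs can be reused. A minimal standard-library Go equivalent of one such check; the certificate path is just mirrored from the log for the example.

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    func main() {
    	// Any PEM-encoded certificate works here; this path mirrors the log above.
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// Same condition as `openssl x509 -checkend 86400`.
    	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
    		fmt.Println("certificate will expire within 24h; needs regeneration")
    		os.Exit(1)
    	}
    	fmt.Println("certificate valid for at least another 24h")
    }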
	I0130 22:14:38.587630  680506 kubeadm.go:404] StartCluster: {Name:old-k8s-version-912992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:38.587746  680506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:38.587803  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:38.630328  680506 cri.go:89] found id: ""
	I0130 22:14:38.630420  680506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:38.642993  680506 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:38.643026  680506 kubeadm.go:636] restartCluster start
	I0130 22:14:38.643095  680506 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:38.653192  680506 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:38.654325  680506 kubeconfig.go:92] found "old-k8s-version-912992" server: "https://192.168.39.84:8443"
	I0130 22:14:38.656891  680506 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:38.666689  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:38.666762  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:38.678857  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.167457  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.167543  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.179779  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.667279  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.667371  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.679872  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.167509  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.167607  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.181001  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.666977  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.667063  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.679278  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.167767  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.167850  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.182139  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.667595  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.667687  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.681165  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:42.167790  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.167888  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.180444  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.777979  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:40.798593  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:40.826400  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:40.839821  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:40.839847  681007 system_pods.go:61] "coredns-5dd5756b68-t65nr" [1379e1d2-263a-4d35-a630-4e197767b62d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:40.839856  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [e8468358-fd44-4f0e-b54b-13e9a478e259] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:40.839868  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [2e35ea0f-78e5-41b4-965a-c428408f84eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:40.839877  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [669d8c85-812f-4bfc-b3bb-7f5041ca8514] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:40.839890  681007 system_pods.go:61] "kube-proxy-9v5rw" [e97b697b-472b-4b3d-886b-39786c1b3760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:40.839905  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [956ec644-071b-4390-b63e-8cbe9ad2a350] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:40.839918  681007 system_pods.go:61] "metrics-server-57f55c9bc5-wlzw4" [3d2bfab3-e9e2-484b-8b8d-779869cbcf9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:40.839927  681007 system_pods.go:61] "storage-provisioner" [e87ce7ad-4933-41b6-8e20-91a4e9ecc45c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:40.839934  681007 system_pods.go:74] duration metric: took 13.512695ms to wait for pod list to return data ...
	I0130 22:14:40.839942  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:40.843711  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:40.843736  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:40.843747  681007 node_conditions.go:105] duration metric: took 3.799992ms to run NodePressure ...
	I0130 22:14:40.843762  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:41.200590  681007 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205872  681007 kubeadm.go:787] kubelet initialised
	I0130 22:14:41.205892  681007 kubeadm.go:788] duration metric: took 5.278409ms waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205899  681007 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:41.214192  681007 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:43.221105  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.787175  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.243973  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.244009  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.250982  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.751725  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.667181  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.667264  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.679726  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.167750  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.167867  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.179954  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.667584  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.667715  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.680828  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.167107  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.167263  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.183107  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.667674  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.667749  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.680942  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.167589  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.167689  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.180786  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.667715  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.667811  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.681199  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.167671  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.167764  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.181276  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.666810  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.666952  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.680935  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:47.167612  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.167711  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.180385  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.221153  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.221375  681007 pod_ready.go:92] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:47.221398  681007 pod_ready.go:81] duration metric: took 6.00718187s waiting for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:47.221411  681007 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:46.244096  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:48.245476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:46.755543  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:49.252337  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.667527  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.667633  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.680519  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.167564  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.167659  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.179815  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.667656  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.667733  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.682679  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.682711  680506 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:48.682722  680506 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:48.682735  680506 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:48.682788  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:48.726311  680506 cri.go:89] found id: ""
	I0130 22:14:48.726399  680506 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:48.744504  680506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:48.755471  680506 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:48.755523  680506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765613  680506 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765636  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:48.886214  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:49.873929  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.090456  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.199471  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
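The restart path above re-runs individual kubeadm phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config rather than doing a full `kubeadm init`. A hedged sketch of driving the same phase sequence from Go with os/exec; the binary path, config path, and phase order are copied from the log, everything else is an assumption rather than minikube's implementation.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	kubeadm := "/var/lib/minikube/binaries/v1.16.0/kubeadm"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"
    	// Phase order as it appears in the log above.
    	phases := [][]string{
    		{"init", "phase", "certs", "all"},
    		{"init", "phase", "kubeconfig", "all"},
    		{"init", "phase", "kubelet-start"},
    		{"init", "phase", "control-plane", "all"},
    		{"init", "phase", "etcd", "local"},
    	}
    	for _, p := range phases {
    		args := append(p, "--config", cfg)
    		cmd := exec.Command(kubeadm, args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		fmt.Println("running:", kubeadm, args)
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", p, err)
    			os.Exit(1)
    		}
    	}
    }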
	I0130 22:14:50.278504  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:50.278604  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:50.779646  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.279488  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.779657  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.829813  680506 api_server.go:72] duration metric: took 1.551314483s to wait for apiserver process to appear ...
	I0130 22:14:51.829852  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:51.829888  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:51.830469  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": dial tcp 192.168.39.84:8443: connect: connection refused
	I0130 22:14:52.330162  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:49.228581  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.230115  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.228169  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.228193  681007 pod_ready.go:81] duration metric: took 6.006776273s waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.228201  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233723  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.233746  681007 pod_ready.go:81] duration metric: took 5.53858ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233754  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238962  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.238983  681007 pod_ready.go:81] duration metric: took 5.221325ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238994  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247623  681007 pod_ready.go:92] pod "kube-proxy-9v5rw" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.247646  681007 pod_ready.go:81] duration metric: took 8.643709ms waiting for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247657  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254079  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.254102  681007 pod_ready.go:81] duration metric: took 6.435694ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254113  681007 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:50.745213  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.245163  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.252956  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.750853  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.331302  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:57.331361  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:55.262286  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.762588  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:55.245641  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.246341  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:58.248157  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.248193  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.248223  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.329248  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.329276  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.330342  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.349249  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.349288  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:58.830998  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.836484  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.836510  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.330646  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.337516  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:59.337559  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.830016  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.836129  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:14:59.846684  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:14:59.846741  680506 api_server.go:131] duration metric: took 8.016878739s to wait for apiserver health ...
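The healthz polling above shows the usual progression while an apiserver comes up: connection refused, then 403 for the anonymous probe until the RBAC bootstrap roles exist, then 500 while poststarthooks finish, then 200. A self-contained Go sketch of such a poller; unlike minikube, it skips TLS verification and sends no client certificate, which is an assumption made purely to keep the example short.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	// Illustrative only: a real check should trust the cluster CA and present
    	// a client certificate instead of skipping verification.
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	url := "https://192.168.39.84:8443/healthz" // endpoint from the log above
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
    			if resp.StatusCode == http.StatusOK {
    				return // apiserver reports healthy
    			}
    		} else {
    			fmt.Println("healthz not reachable yet:", err)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for apiserver /healthz")
    }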
	I0130 22:14:59.846760  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:59.846770  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:59.848874  680506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:55.751242  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.755048  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:00.251809  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.850215  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:59.860069  680506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
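The "Configuring bridge CNI" step above writes a conflist into /etc/cni/net.d over SSH. As a rough illustration of the general shape of such a file, here is a sketch that writes a plain bridge + host-local + portmap configuration; the field values are examples using the pod subnet from the log, not the exact 457-byte file minikube copies.

    package main

    import "os"

    // Illustrative bridge CNI configuration; values are examples, not minikube's file.
    const conflist = `{
      "cniVersion": "0.3.1",
      "name": "k8s",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16"
          }
        },
        {
          "type": "portmap",
          "capabilities": {"portMappings": true}
        }
      ]
    }
    `

    func main() {
    	// Written locally here; minikube itself copies the file to the node over SSH.
    	if err := os.WriteFile("1-k8s.conflist", []byte(conflist), 0o644); err != nil {
    		panic(err)
    	}
    }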
	I0130 22:14:59.880017  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:59.891300  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:14:59.891330  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:14:59.891335  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:14:59.891340  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:14:59.891345  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Pending
	I0130 22:14:59.891349  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:14:59.891352  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:14:59.891360  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:14:59.891368  680506 system_pods.go:74] duration metric: took 11.331282ms to wait for pod list to return data ...
	I0130 22:14:59.891377  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:59.895522  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:59.895558  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:59.895571  680506 node_conditions.go:105] duration metric: took 4.184167ms to run NodePressure ...
	I0130 22:14:59.895591  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:15:00.214560  680506 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218844  680506 kubeadm.go:787] kubelet initialised
	I0130 22:15:00.218863  680506 kubeadm.go:788] duration metric: took 4.278574ms waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218870  680506 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:00.223310  680506 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.228349  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228371  680506 pod_ready.go:81] duration metric: took 5.033709ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.228380  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228385  680506 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.236353  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236378  680506 pod_ready.go:81] duration metric: took 7.981988ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.236387  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236394  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.244477  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244504  680506 pod_ready.go:81] duration metric: took 8.099653ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.244521  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244531  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.283561  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283590  680506 pod_ready.go:81] duration metric: took 39.047028ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.283602  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283610  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.683495  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683524  680506 pod_ready.go:81] duration metric: took 399.906973ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.683537  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683544  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:01.084061  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084093  680506 pod_ready.go:81] duration metric: took 400.538074ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:01.084107  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084117  680506 pod_ready.go:38] duration metric: took 865.238684ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:01.084149  680506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:15:01.120344  680506 ops.go:34] apiserver oom_adj: -16
	I0130 22:15:01.120372  680506 kubeadm.go:640] restartCluster took 22.477337631s
	I0130 22:15:01.120384  680506 kubeadm.go:406] StartCluster complete in 22.532762257s
	I0130 22:15:01.120408  680506 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.120536  680506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:15:01.123018  680506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.123321  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:15:01.123514  680506 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:15:01.123624  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:15:01.123662  680506 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123683  680506 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123701  680506 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-912992"
	W0130 22:15:01.123709  680506 addons.go:243] addon metrics-server should already be in state true
	I0130 22:15:01.123745  680506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-912992"
	I0130 22:15:01.123769  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124153  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124178  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.124189  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124218  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.123635  680506 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-912992"
	I0130 22:15:01.124295  680506 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-912992"
	W0130 22:15:01.124303  680506 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:15:01.124357  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124693  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124741  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.141006  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0130 22:15:01.141022  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0130 22:15:01.141594  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.141697  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.142122  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142142  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142273  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142297  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142793  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.142837  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0130 22:15:01.142797  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.143291  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.143380  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.143411  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.143758  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.143786  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.144174  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.144210  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.144212  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.144438  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.148328  680506 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-912992"
	W0130 22:15:01.148350  680506 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:15:01.148378  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.148706  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.148734  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.163324  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0130 22:15:01.163720  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0130 22:15:01.164054  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164187  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164638  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164665  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.164806  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164817  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.165086  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165242  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165310  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.165844  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.167686  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.170253  680506 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:15:01.168142  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.169379  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0130 22:15:01.172172  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:15:01.172200  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:15:01.172228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.174608  680506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:15:01.173335  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.175891  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.176824  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.177101  680506 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.177110  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.177116  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:15:01.177134  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.177137  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.177239  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.177855  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.178037  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.181184  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181626  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.181644  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181879  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.182032  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.182215  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.182321  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.182343  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.182745  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.182805  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.183262  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.183296  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.218510  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0130 22:15:01.218955  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.219566  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.219598  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.219976  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.220136  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.221882  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.222143  680506 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.222161  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:15:01.222178  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.225129  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225437  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.225454  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225732  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.225875  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.225948  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.226015  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.362950  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.405756  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:15:01.405829  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:15:01.442804  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.468468  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:15:01.468501  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:15:01.514493  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.514530  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:15:01.531543  680506 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 22:15:01.551886  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.697743  680506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-912992" context rescaled to 1 replicas
	I0130 22:15:01.697805  680506 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:15:01.699954  680506 out.go:177] * Verifying Kubernetes components...
	I0130 22:15:01.701746  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078654  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078682  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078704  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078736  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078751  680506 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:02.079190  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079200  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079221  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079229  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079231  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079235  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079245  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079246  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079200  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079257  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079266  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079665  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079685  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079695  680506 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-912992"
	I0130 22:15:02.079699  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079719  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.081702  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081725  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.081736  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.081746  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.081969  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081999  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.087366  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.087387  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.087642  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.087661  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.089698  680506 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 22:15:02.091156  680506 addons.go:505] enable addons completed in 967.651598ms: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 22:14:59.767179  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.262656  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.743796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:01.745268  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.245639  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.754252  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:05.250850  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.082265  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:06.582230  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:04.764379  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.764868  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.765839  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.744476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.744978  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.584004  680506 node_ready.go:49] node "old-k8s-version-912992" has status "Ready":"True"
	I0130 22:15:08.584038  680506 node_ready.go:38] duration metric: took 6.50526711s waiting for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:08.584052  680506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:08.591084  680506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595709  680506 pod_ready.go:92] pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.595735  680506 pod_ready.go:81] duration metric: took 4.623355ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595747  680506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600152  680506 pod_ready.go:92] pod "etcd-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.600175  680506 pod_ready.go:81] duration metric: took 4.419847ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600186  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604426  680506 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.604444  680506 pod_ready.go:81] duration metric: took 4.249901ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604454  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608671  680506 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.608685  680506 pod_ready.go:81] duration metric: took 4.224838ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608694  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984275  680506 pod_ready.go:92] pod "kube-proxy-qm7xx" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.984306  680506 pod_ready.go:81] duration metric: took 375.604271ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984321  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384278  680506 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:09.384303  680506 pod_ready.go:81] duration metric: took 399.974439ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384316  680506 pod_ready.go:38] duration metric: took 800.249209ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:09.384331  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:15:09.384383  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:15:09.399639  680506 api_server.go:72] duration metric: took 7.701783762s to wait for apiserver process to appear ...
	I0130 22:15:09.399665  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:15:09.399683  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:15:09.406824  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:15:09.407829  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:15:09.407850  680506 api_server.go:131] duration metric: took 8.177146ms to wait for apiserver health ...
	I0130 22:15:09.407860  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:15:09.584994  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:15:09.585031  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.585039  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.585046  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.585053  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.585059  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.585065  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.585072  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.585080  680506 system_pods.go:74] duration metric: took 177.213093ms to wait for pod list to return data ...
	I0130 22:15:09.585092  680506 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:15:09.784286  680506 default_sa.go:45] found service account: "default"
	I0130 22:15:09.784313  680506 default_sa.go:55] duration metric: took 199.211541ms for default service account to be created ...
	I0130 22:15:09.784322  680506 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:15:09.987063  680506 system_pods.go:86] 7 kube-system pods found
	I0130 22:15:09.987094  680506 system_pods.go:89] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.987103  680506 system_pods.go:89] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.987109  680506 system_pods.go:89] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.987114  680506 system_pods.go:89] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.987120  680506 system_pods.go:89] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.987125  680506 system_pods.go:89] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.987131  680506 system_pods.go:89] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.987140  680506 system_pods.go:126] duration metric: took 202.811673ms to wait for k8s-apps to be running ...
	I0130 22:15:09.987150  680506 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:15:09.987206  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:10.001966  680506 system_svc.go:56] duration metric: took 14.805505ms WaitForService to wait for kubelet.
	I0130 22:15:10.001997  680506 kubeadm.go:581] duration metric: took 8.30415043s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:15:10.002022  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:15:10.184699  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:15:10.184743  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:15:10.184756  680506 node_conditions.go:105] duration metric: took 182.728475ms to run NodePressure ...
	I0130 22:15:10.184772  680506 start.go:228] waiting for startup goroutines ...
	I0130 22:15:10.184782  680506 start.go:233] waiting for cluster config update ...
	I0130 22:15:10.184796  680506 start.go:242] writing updated cluster config ...
	I0130 22:15:10.185114  680506 ssh_runner.go:195] Run: rm -f paused
	I0130 22:15:10.239744  680506 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 22:15:10.241916  680506 out.go:177] 
	W0130 22:15:10.243307  680506 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 22:15:10.244540  680506 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 22:15:10.245844  680506 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-912992" cluster and "default" namespace by default
	I0130 22:15:07.753442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.250385  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.770107  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.262302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:11.244598  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.744540  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:12.252794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:14.750293  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:15.761573  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:17.764138  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.245719  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.744763  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.751093  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.751144  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:19.766344  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:22.262506  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.243857  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.244633  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.250405  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.752715  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:24.762412  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.260985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:25.744105  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.746611  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:26.250066  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:28.250115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.251911  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:29.262020  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:31.763782  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.243836  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.244064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.244535  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.754073  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:35.249927  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.260099  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.262332  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.262515  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.245173  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.747970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:37.252466  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:39.254833  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:40.264075  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:42.763978  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.244902  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.246545  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.750938  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.751361  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.262599  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.769508  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.743965  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.745769  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:46.250381  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:48.250841  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.262796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.763728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:49.746064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:51.750634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.244634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.750564  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.751105  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.751544  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:55.261060  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:57.262293  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.245111  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:58.246787  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.751681  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.250409  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.762572  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.765901  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:00.744216  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:02.744765  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.750473  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.252199  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.267246  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.764985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:05.252271  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:07.745483  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.252327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:08.750460  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:09.263071  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.764448  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:10.244124  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:12.245643  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.248183  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.254631  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:13.752086  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.262534  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.763532  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.744988  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.746562  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.251554  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.751130  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:19.261302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.262097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.764162  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.243403  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.245825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:20.751443  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.251248  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:26.261011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.263281  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.744554  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:27.744970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.750244  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.249555  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.250246  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.761252  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.762070  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:29.745453  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.243772  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.245396  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.251218  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.752524  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:35.261942  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.264695  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:36.745702  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.244617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.250645  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.251192  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.762454  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.765643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.244956  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.245892  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.750084  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.751479  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:44.262004  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.262160  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.763669  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:45.744222  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:47.745591  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.249746  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.250654  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.252500  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:51.261603  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:53.261672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.244099  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.744215  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.749766  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.750634  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:55.261803  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:57.262915  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.744549  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.745030  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.244809  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.751851  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.258417  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.268254  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.761347  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.761999  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.246996  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.744672  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.750976  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.751083  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:05.763147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.264472  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.244449  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.244796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.250266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.250718  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.761567  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.762159  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.245064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.744572  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.750221  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.750688  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.752051  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:15.261414  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.262083  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.745621  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.243837  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.244825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.250798  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.251873  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.262614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.761873  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.762158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.245432  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.745684  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.750760  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:24.252401  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:25.762960  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.261732  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.246290  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.744375  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.749794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.750363  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:30.262011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:32.762896  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.243646  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.245351  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.251364  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.750995  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.262828  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.763644  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.245530  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.246211  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.752489  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.251704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.261365  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.261786  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:39.745084  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:41.746617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.244143  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.750921  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:45.251115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.262664  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.764196  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.769165  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.744967  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.745930  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:47.751743  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:50.250561  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.261754  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.764405  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.244859  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.744487  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:52.254402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:54.751442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:56.260885  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.261304  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:55.747588  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.244383  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:57.250767  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:59.750343  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.262535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.762755  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.248648  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.744883  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:01.751253  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:03.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:04.763841  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.263079  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:05.244262  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.244758  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.245079  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:06.252399  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:08.750732  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.263723  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.766305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.771997  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.744688  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:14.243700  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:10.751691  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.254909  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.263146  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.764654  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.244291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.250725  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:15.751459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:17.752591  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.251354  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:21.263171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.762025  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.238489  680786 pod_ready.go:81] duration metric: took 4m0.001085938s waiting for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:20.238561  680786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:20.238585  680786 pod_ready.go:38] duration metric: took 4m13.374837351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:20.238635  680786 kubeadm.go:640] restartCluster took 4m32.952408079s
	W0130 22:18:20.238771  680786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:20.238897  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:22.752701  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.743814  680821 pod_ready.go:81] duration metric: took 4m0.000772856s waiting for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:23.743843  680821 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:23.743867  680821 pod_ready.go:38] duration metric: took 4m8.55197109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:23.743901  680821 kubeadm.go:640] restartCluster took 4m27.679173945s
	W0130 22:18:23.743979  680821 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:23.744016  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:25.762818  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:27.766206  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:30.262706  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:32.263895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:33.696118  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.457184259s)
	I0130 22:18:33.696246  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:33.709756  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:33.719095  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:33.727249  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:33.727304  680786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:33.783803  680786 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0130 22:18:33.783934  680786 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:33.947330  680786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:33.947473  680786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:33.947594  680786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:34.185129  680786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:34.186847  680786 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:34.186958  680786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:34.187047  680786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:34.187130  680786 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:34.187254  680786 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:34.187590  680786 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:34.188233  680786 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:34.188591  680786 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:34.189435  680786 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:34.189737  680786 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:34.190284  680786 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:34.190677  680786 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:34.190788  680786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:34.357057  680786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:34.468135  680786 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0130 22:18:34.785137  680786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:34.900902  680786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:34.973785  680786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:34.974693  680786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:34.977481  680786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:37.518038  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.773993992s)
	I0130 22:18:37.518130  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:37.533148  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:37.542965  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:37.552859  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:37.552915  680821 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:37.614837  680821 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:18:37.614964  680821 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:37.783252  680821 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:37.783431  680821 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:37.783598  680821 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:38.009789  680821 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:38.011805  680821 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:38.011921  680821 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:38.012010  680821 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:38.012140  680821 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:38.012573  680821 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:38.013135  680821 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:38.014103  680821 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:38.015459  680821 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:38.016522  680821 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:38.017879  680821 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:38.018669  680821 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:38.019318  680821 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:38.019416  680821 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:38.190496  680821 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:38.487122  680821 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:38.567485  680821 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:38.764572  680821 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:38.765081  680821 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:38.771540  680821 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:34.761686  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:36.763512  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:38.772838  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:34.979275  680786 out.go:204]   - Booting up control plane ...
	I0130 22:18:34.979394  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:34.979502  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:34.979687  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:35.000161  680786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:35.001100  680786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:35.001180  680786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:35.143762  680786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:38.773177  680821 out.go:204]   - Booting up control plane ...
	I0130 22:18:38.773326  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:38.773447  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:38.774160  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:38.793263  680821 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:38.793414  680821 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:38.793489  680821 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:38.942605  680821 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:41.263027  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.264305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.147099  680786 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003222 seconds
	I0130 22:18:43.165914  680786 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:43.183810  680786 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:43.729066  680786 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:43.729309  680786 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-023824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:44.247224  680786 kubeadm.go:322] [bootstrap-token] Using token: 8v59zo.bsn08ubvfg01lew3
	I0130 22:18:44.248930  680786 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:44.249075  680786 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:44.256127  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:44.265628  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:44.269906  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:44.278100  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:44.283097  680786 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:44.301902  680786 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:44.542713  680786 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:44.665337  680786 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:44.665367  680786 kubeadm.go:322] 
	I0130 22:18:44.665448  680786 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:44.665463  680786 kubeadm.go:322] 
	I0130 22:18:44.665573  680786 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:44.665583  680786 kubeadm.go:322] 
	I0130 22:18:44.665660  680786 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:44.665761  680786 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:44.665830  680786 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:44.665840  680786 kubeadm.go:322] 
	I0130 22:18:44.665909  680786 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:44.665927  680786 kubeadm.go:322] 
	I0130 22:18:44.665994  680786 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:44.666003  680786 kubeadm.go:322] 
	I0130 22:18:44.666084  680786 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:44.666220  680786 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:44.666324  680786 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:44.666349  680786 kubeadm.go:322] 
	I0130 22:18:44.666456  680786 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:44.666544  680786 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:44.666551  680786 kubeadm.go:322] 
	I0130 22:18:44.666646  680786 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.666764  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:44.666789  680786 kubeadm.go:322] 	--control-plane 
	I0130 22:18:44.666795  680786 kubeadm.go:322] 
	I0130 22:18:44.666898  680786 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:44.666906  680786 kubeadm.go:322] 
	I0130 22:18:44.667000  680786 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.667121  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:44.667741  680786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:44.667773  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:18:44.667784  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:44.669613  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:47.444081  680821 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502071 seconds
	I0130 22:18:47.444241  680821 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:47.470140  680821 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:48.014141  680821 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:48.014385  680821 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-713938 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:48.528168  680821 kubeadm.go:322] [bootstrap-token] Using token: 5j3t7l.lolt26xy60ozf3ca
	I0130 22:18:45.765205  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.261716  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.529669  680821 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:48.529807  680821 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:48.544442  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:48.552536  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:48.555846  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:48.559711  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:48.563810  680821 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:48.580095  680821 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:48.820236  680821 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:48.950911  680821 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:48.951833  680821 kubeadm.go:322] 
	I0130 22:18:48.951927  680821 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:48.951958  680821 kubeadm.go:322] 
	I0130 22:18:48.952042  680821 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:48.952063  680821 kubeadm.go:322] 
	I0130 22:18:48.952089  680821 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:48.952144  680821 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:48.952190  680821 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:48.952196  680821 kubeadm.go:322] 
	I0130 22:18:48.952267  680821 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:48.952287  680821 kubeadm.go:322] 
	I0130 22:18:48.952346  680821 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:48.952356  680821 kubeadm.go:322] 
	I0130 22:18:48.952439  680821 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:48.952554  680821 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:48.952661  680821 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:48.952671  680821 kubeadm.go:322] 
	I0130 22:18:48.952805  680821 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:48.952894  680821 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:48.952906  680821 kubeadm.go:322] 
	I0130 22:18:48.953001  680821 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953139  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:48.953177  680821 kubeadm.go:322] 	--control-plane 
	I0130 22:18:48.953189  680821 kubeadm.go:322] 
	I0130 22:18:48.953296  680821 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:48.953306  680821 kubeadm.go:322] 
	I0130 22:18:48.953413  680821 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953555  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:48.954606  680821 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:48.954659  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:18:48.954677  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:48.956379  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:44.671035  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:44.696043  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:44.785738  680786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:44.785867  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.785894  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=no-preload-023824 minikube.k8s.io/updated_at=2024_01_30T22_18_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.887327  680786 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:45.135926  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:45.636755  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.136406  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.636077  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.136080  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.636924  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.136830  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.636945  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.136038  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.957922  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:48.974487  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:49.035551  680821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=embed-certs-713938 minikube.k8s.io/updated_at=2024_01_30T22_18_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.085285  680821 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:49.366490  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.866648  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.366789  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.761888  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:52.765352  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:53.254549  681007 pod_ready.go:81] duration metric: took 4m0.000414494s waiting for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:53.254593  681007 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:53.254623  681007 pod_ready.go:38] duration metric: took 4m12.048715105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:53.254662  681007 kubeadm.go:640] restartCluster took 4m34.780590329s
	W0130 22:18:53.254758  681007 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:53.254793  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:49.635946  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.136681  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.636090  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.136427  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.636232  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.136032  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.636639  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.136839  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.636957  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.136140  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.866857  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.367211  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.867291  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.366659  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.867351  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.366925  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.867180  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.366846  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.866651  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.366588  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.636246  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.136047  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.636970  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.136258  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.636239  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.136269  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.262159  680786 kubeadm.go:1088] duration metric: took 12.476361074s to wait for elevateKubeSystemPrivileges.
	I0130 22:18:57.262235  680786 kubeadm.go:406] StartCluster complete in 5m10.025020914s
	I0130 22:18:57.262288  680786 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.262417  680786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:18:57.265204  680786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.265504  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:18:57.265655  680786 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:18:57.265746  680786 addons.go:69] Setting storage-provisioner=true in profile "no-preload-023824"
	I0130 22:18:57.265769  680786 addons.go:234] Setting addon storage-provisioner=true in "no-preload-023824"
	W0130 22:18:57.265784  680786 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:18:57.265774  680786 addons.go:69] Setting default-storageclass=true in profile "no-preload-023824"
	I0130 22:18:57.265812  680786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-023824"
	I0130 22:18:57.265838  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:18:57.265817  680786 addons.go:69] Setting metrics-server=true in profile "no-preload-023824"
	I0130 22:18:57.265880  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.265898  680786 addons.go:234] Setting addon metrics-server=true in "no-preload-023824"
	W0130 22:18:57.265925  680786 addons.go:243] addon metrics-server should already be in state true
	I0130 22:18:57.265973  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266315  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266349  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266376  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266416  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.286273  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0130 22:18:57.286366  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I0130 22:18:57.286463  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0130 22:18:57.287691  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287692  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287851  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.288302  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288323  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288428  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288439  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288511  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288524  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288850  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.288897  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289215  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289405  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289437  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289685  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289719  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289792  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.293877  680786 addons.go:234] Setting addon default-storageclass=true in "no-preload-023824"
	W0130 22:18:57.293899  680786 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:18:57.293928  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.294325  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.294356  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.310259  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0130 22:18:57.310765  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.311270  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.311289  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.311818  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.312317  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.313547  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0130 22:18:57.314105  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.314665  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.314686  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.314752  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.316570  680786 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:18:57.315368  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.317812  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:18:57.317835  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:18:57.317858  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.318173  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.318194  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.321603  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.321671  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0130 22:18:57.321961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.322001  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.322280  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.322296  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.322491  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	W0130 22:18:57.322819  680786 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-023824" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0130 22:18:57.322843  680786 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0130 22:18:57.322866  680786 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:18:57.324267  680786 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:57.323003  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.323084  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.325567  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.325663  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:57.325909  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.326903  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.327113  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.329169  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.331160  680786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:18:57.332481  680786 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.332500  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:18:57.332519  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.336038  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336525  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.336546  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336746  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.336901  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.337031  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.337256  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.338027  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0130 22:18:57.338387  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.339078  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.339097  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.339406  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.339628  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.341385  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.341687  680786 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.341705  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:18:57.341725  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.344745  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345159  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.345180  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345408  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.345613  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.349708  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.349906  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.525974  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.531582  680786 node_ready.go:35] waiting up to 6m0s for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.532157  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:18:57.546542  680786 node_ready.go:49] node "no-preload-023824" has status "Ready":"True"
	I0130 22:18:57.546575  680786 node_ready.go:38] duration metric: took 14.926402ms waiting for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.546592  680786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:57.573983  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:18:57.589817  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:18:57.589854  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:18:57.684894  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:18:57.684926  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:18:57.715247  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.726490  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:57.726521  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:18:57.824368  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:58.842258  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.316238822s)
	I0130 22:18:58.842310  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842327  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842341  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.310137299s)
	I0130 22:18:58.842386  680786 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0130 22:18:58.842447  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.127164198s)
	I0130 22:18:58.842474  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842486  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842830  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842870  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842893  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842898  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842900  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842921  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842924  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842931  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842937  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842948  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.843222  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843243  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.843456  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843469  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.885944  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.885978  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.886311  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.888268  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.888288  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228029  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.403587938s)
	I0130 22:18:59.228205  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228233  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.228672  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.228714  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.228738  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228749  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228762  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.229119  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.229182  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.229197  680786 addons.go:470] Verifying addon metrics-server=true in "no-preload-023824"
	I0130 22:18:59.229126  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.230815  680786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:18:59.232158  680786 addons.go:505] enable addons completed in 1.966513856s: enabled=[storage-provisioner default-storageclass metrics-server]
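The host-record injection reported at 22:18:58 above patches the kube-system coredns ConfigMap in place: the sed expressions add a log directive before errors and a hosts block before the forward stanza. Reconstructed from those expressions, the Corefile fragment should come out roughly as shown below, and can be inspected with the same kubectl the log invokes (sketch; 192.168.61.1 is the host-only gateway for this profile):

    sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml
    # expected fragment of the Corefile key after the patch:
    #     log
    #     errors
    #     ...
    #     hosts {
    #        192.168.61.1 host.minikube.internal
    #        fallthrough
    #     }
    #     forward . /etc/resolv.conf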
	I0130 22:18:55.867390  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.367181  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.866689  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.366578  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.867406  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.366702  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.867537  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.366860  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.867263  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.366507  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.866976  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.994251  680821 kubeadm.go:1088] duration metric: took 11.958653294s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:00.994309  680821 kubeadm.go:406] StartCluster complete in 5m4.981146882s
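The burst of identical "get sa default" runs above (and the matching burst at 22:19:18-22:19:31 later in this log) is minikube polling, roughly every 500 ms, until the default ServiceAccount exists so that kube-system privileges can be elevated. A minimal shell equivalent of that wait, using the same binary path and kubeconfig as the log:

    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done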
	I0130 22:19:00.994337  680821 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.994437  680821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:00.997310  680821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.997649  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:00.997866  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:00.997819  680821 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:00.997932  680821 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-713938"
	I0130 22:19:00.997951  680821 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-713938"
	W0130 22:19:00.997962  680821 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:00.997978  680821 addons.go:69] Setting metrics-server=true in profile "embed-certs-713938"
	I0130 22:19:00.997979  680821 addons.go:69] Setting default-storageclass=true in profile "embed-certs-713938"
	I0130 22:19:00.997994  680821 addons.go:234] Setting addon metrics-server=true in "embed-certs-713938"
	W0130 22:19:00.998002  680821 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:00.998009  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998012  680821 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-713938"
	I0130 22:19:00.998035  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998425  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998450  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.018726  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0130 22:19:01.018744  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0130 22:19:01.018754  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0130 22:19:01.019224  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019255  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019329  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019860  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.019890  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020012  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020062  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.020311  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020379  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020530  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.020984  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.021001  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021030  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.021533  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021581  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.024902  680821 addons.go:234] Setting addon default-storageclass=true in "embed-certs-713938"
	W0130 22:19:01.024926  680821 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:01.024955  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:01.025333  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.025372  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.041760  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0130 22:19:01.043510  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0130 22:19:01.043937  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.043980  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.044434  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044454  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.044864  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044902  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.045102  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045331  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045686  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.045730  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.045952  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.049065  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0130 22:19:01.049076  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.051101  680821 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:01.049716  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.052918  680821 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.052937  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:01.052959  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.055109  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.055135  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.057586  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.057591  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057611  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.057625  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057656  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.057829  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.057831  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.057974  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.058123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.063470  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.065048  680821 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:01.066385  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:01.066404  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:01.066425  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.066427  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I0130 22:19:01.067271  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.067806  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.067834  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.068198  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.068403  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.069684  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070069  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.070133  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.070162  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070347  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.070369  680821 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.070381  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:01.070402  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.073308  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073914  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.073945  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073978  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074155  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074207  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.074325  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.074346  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074441  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074534  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.210631  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.237088  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.307032  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:01.307130  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:01.368366  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:01.368405  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:01.388184  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:01.443355  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.443414  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:01.558399  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.610498  680821 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-713938" context rescaled to 1 replicas
	I0130 22:19:01.610545  680821 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:01.612750  680821 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:59.584739  680786 pod_ready.go:102] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:01.089751  680786 pod_ready.go:92] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.089826  680786 pod_ready.go:81] duration metric: took 3.515759187s waiting for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.089853  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098560  680786 pod_ready.go:92] pod "coredns-76f75df574-znj8f" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.098645  680786 pod_ready.go:81] duration metric: took 8.774285ms waiting for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098671  680786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.106943  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.107036  680786 pod_ready.go:81] duration metric: took 8.345837ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.107062  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120384  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.120413  680786 pod_ready.go:81] duration metric: took 13.332445ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120427  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129739  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.129825  680786 pod_ready.go:81] duration metric: took 9.387442ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129850  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282077  680786 pod_ready.go:92] pod "kube-proxy-8rn6v" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.282110  680786 pod_ready.go:81] duration metric: took 1.152243055s waiting for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282123  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681191  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.681221  680786 pod_ready.go:81] duration metric: took 399.089453ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681232  680786 pod_ready.go:38] duration metric: took 5.134627161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:02.681249  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:19:02.681313  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:19:02.695239  680786 api_server.go:72] duration metric: took 5.372338357s to wait for apiserver process to appear ...
	I0130 22:19:02.695265  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:19:02.695291  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:19:02.700070  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:19:02.701235  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:19:02.701266  680786 api_server.go:131] duration metric: took 5.988974ms to wait for apiserver health ...
	I0130 22:19:02.701279  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:19:02.899520  680786 system_pods.go:59] 9 kube-system pods found
	I0130 22:19:02.899558  680786 system_pods.go:61] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:02.899565  680786 system_pods.go:61] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:02.899572  680786 system_pods.go:61] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:02.899579  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:02.899586  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:02.899592  680786 system_pods.go:61] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:02.899599  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:02.899610  680786 system_pods.go:61] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:02.899626  680786 system_pods.go:61] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:02.899637  680786 system_pods.go:74] duration metric: took 198.349705ms to wait for pod list to return data ...
	I0130 22:19:02.899649  680786 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:19:03.080624  680786 default_sa.go:45] found service account: "default"
	I0130 22:19:03.080668  680786 default_sa.go:55] duration metric: took 181.003649ms for default service account to be created ...
	I0130 22:19:03.080681  680786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:19:03.285004  680786 system_pods.go:86] 9 kube-system pods found
	I0130 22:19:03.285040  680786 system_pods.go:89] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:03.285048  680786 system_pods.go:89] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:03.285056  680786 system_pods.go:89] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:03.285063  680786 system_pods.go:89] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:03.285069  680786 system_pods.go:89] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:03.285073  680786 system_pods.go:89] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:03.285078  680786 system_pods.go:89] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:03.285089  680786 system_pods.go:89] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:03.285097  680786 system_pods.go:89] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:03.285107  680786 system_pods.go:126] duration metric: took 204.418927ms to wait for k8s-apps to be running ...
	I0130 22:19:03.285117  680786 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:19:03.285172  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.303077  680786 system_svc.go:56] duration metric: took 17.949308ms WaitForService to wait for kubelet.
	I0130 22:19:03.303108  680786 kubeadm.go:581] duration metric: took 5.980212644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:19:03.303133  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:19:03.481755  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:19:03.481794  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:19:03.481804  680786 node_conditions.go:105] duration metric: took 178.666283ms to run NodePressure ...
	I0130 22:19:03.481816  680786 start.go:228] waiting for startup goroutines ...
	I0130 22:19:03.481822  680786 start.go:233] waiting for cluster config update ...
	I0130 22:19:03.481860  680786 start.go:242] writing updated cluster config ...
	I0130 22:19:03.482145  680786 ssh_runner.go:195] Run: rm -f paused
	I0130 22:19:03.549733  680786 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 22:19:03.551653  680786 out.go:177] * Done! kubectl is now configured to use "no-preload-023824" cluster and "default" namespace by default
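The checks the no-preload-023824 run performs before printing "Done!" (node Ready, kube-system pods Running, apiserver healthz returning the bare string "ok") can be repeated against the finished profile. A minimal verification sketch, assuming the kubeconfig context carries the profile name as usual:

    kubectl --context no-preload-023824 get nodes
    kubectl --context no-preload-023824 -n kube-system get pods
    kubectl --context no-preload-023824 get --raw /healthz   # returns the bare "ok" the log shows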
	I0130 22:19:01.614025  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.810450  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.573311695s)
	I0130 22:19:03.810519  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810531  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810592  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599920536s)
	I0130 22:19:03.810625  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.422412443s)
	I0130 22:19:03.810639  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810653  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810640  680821 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 22:19:03.811010  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811010  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811035  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811034  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811038  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811045  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811055  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811056  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811065  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811074  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811299  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811317  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811626  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811677  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811686  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838002  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.838036  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.838339  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.838364  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838384  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842042  680821 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.227988129s)
	I0130 22:19:03.842085  680821 node_ready.go:35] waiting up to 6m0s for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.842321  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.283887868s)
	I0130 22:19:03.842355  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842369  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.842728  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842753  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.842761  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.842772  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842784  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.843015  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.843031  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.843042  680821 addons.go:470] Verifying addon metrics-server=true in "embed-certs-713938"
	I0130 22:19:03.844872  680821 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:03.846361  680821 addons.go:505] enable addons completed in 2.848549166s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:03.857259  680821 node_ready.go:49] node "embed-certs-713938" has status "Ready":"True"
	I0130 22:19:03.857281  680821 node_ready.go:38] duration metric: took 15.183316ms waiting for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.857290  680821 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:03.880136  680821 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392506  680821 pod_ready.go:92] pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.392542  680821 pod_ready.go:81] duration metric: took 1.512370879s waiting for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392556  680821 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402272  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.402382  680821 pod_ready.go:81] duration metric: took 9.816254ms waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402410  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414813  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.414844  680821 pod_ready.go:81] duration metric: took 12.42049ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414861  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424628  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.424651  680821 pod_ready.go:81] duration metric: took 9.782ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424660  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445652  680821 pod_ready.go:92] pod "kube-proxy-f7mgv" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.445679  680821 pod_ready.go:81] duration metric: took 21.012459ms waiting for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445692  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.459758  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.204942723s)
	I0130 22:19:07.459833  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:07.475749  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:19:07.487056  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:19:07.498268  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:19:07.498316  681007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:19:07.552393  681007 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:19:07.552482  681007 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:19:07.703415  681007 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:19:07.703558  681007 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:19:07.703688  681007 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:19:07.929127  681007 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:19:07.931129  681007 out.go:204]   - Generating certificates and keys ...
	I0130 22:19:07.931256  681007 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:19:07.931340  681007 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:19:07.931443  681007 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:19:07.931568  681007 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:19:07.931907  681007 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:19:07.933061  681007 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:19:07.934226  681007 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:19:07.935564  681007 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:19:07.936846  681007 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:19:07.938253  681007 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:19:07.939205  681007 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:19:07.939281  681007 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:19:08.017218  681007 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:19:08.179939  681007 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:19:08.390089  681007 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:19:08.500690  681007 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:19:08.501201  681007 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:19:08.506551  681007 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:19:08.508442  681007 out.go:204]   - Booting up control plane ...
	I0130 22:19:08.508554  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:19:08.508643  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:19:08.509176  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:19:08.528978  681007 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:19:08.529909  681007 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:19:08.530016  681007 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:19:08.657813  681007 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:19:05.846282  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.846316  680821 pod_ready.go:81] duration metric: took 400.615309ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.846329  680821 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.854210  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:10.354894  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:12.358737  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:14.361808  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:16.661056  681007 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003483 seconds
	I0130 22:19:16.663313  681007 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:19:16.682919  681007 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:19:17.218185  681007 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:19:17.218446  681007 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-850803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:19:17.733745  681007 kubeadm.go:322] [bootstrap-token] Using token: oi6eg1.osding0t7oyyeu0p
	I0130 22:19:17.735211  681007 out.go:204]   - Configuring RBAC rules ...
	I0130 22:19:17.735388  681007 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:19:17.744899  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:19:17.754341  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:19:17.758107  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:19:17.761508  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:19:17.765503  681007 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:19:17.781414  681007 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:19:18.095502  681007 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:19:18.190245  681007 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:19:18.190272  681007 kubeadm.go:322] 
	I0130 22:19:18.190348  681007 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:19:18.190360  681007 kubeadm.go:322] 
	I0130 22:19:18.190452  681007 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:19:18.190461  681007 kubeadm.go:322] 
	I0130 22:19:18.190493  681007 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:19:18.190604  681007 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:19:18.190702  681007 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:19:18.190716  681007 kubeadm.go:322] 
	I0130 22:19:18.190800  681007 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:19:18.190835  681007 kubeadm.go:322] 
	I0130 22:19:18.190892  681007 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:19:18.190906  681007 kubeadm.go:322] 
	I0130 22:19:18.190976  681007 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:19:18.191074  681007 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:19:18.191178  681007 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:19:18.191191  681007 kubeadm.go:322] 
	I0130 22:19:18.191293  681007 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:19:18.191416  681007 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:19:18.191438  681007 kubeadm.go:322] 
	I0130 22:19:18.191544  681007 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.191672  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:19:18.191703  681007 kubeadm.go:322] 	--control-plane 
	I0130 22:19:18.191714  681007 kubeadm.go:322] 
	I0130 22:19:18.191814  681007 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:19:18.191824  681007 kubeadm.go:322] 
	I0130 22:19:18.191936  681007 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.192085  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:19:18.192660  681007 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:19:18.192684  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:19:18.192692  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:19:18.194376  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:19:18.195608  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:19:18.244311  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
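The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration recommended for the "kvm2" driver with the "crio" runtime. Its exact contents are not shown in the log; a representative bridge conflist of that kind looks roughly like the following (illustrative only; the plugin options and pod subnet here are assumptions, not the generated file):

    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        { "type": "bridge", "bridge": "bridge", "isDefaultGateway": true, "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF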
	I0130 22:19:18.285107  681007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:19:18.285193  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.285210  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=default-k8s-diff-port-850803 minikube.k8s.io/updated_at=2024_01_30T22_19_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.682930  681007 ops.go:34] apiserver oom_adj: -16
	I0130 22:19:18.683119  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:16.854674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:18.854723  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:19.184109  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:19.683715  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.183529  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.684197  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.184124  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.684022  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.184033  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.683812  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.184203  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.683513  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.857387  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:23.354163  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:25.354683  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:24.184064  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:24.683177  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.183896  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.683522  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.183779  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.683891  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.183468  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.683878  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.183471  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.683793  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.853744  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:30.356959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:29.183658  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:29.683264  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.183311  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.683828  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.183841  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.287952  681007 kubeadm.go:1088] duration metric: took 13.002835585s to wait for elevateKubeSystemPrivileges.
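The repeated `kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig` calls above are a readiness poll: the bootstrapper keeps querying until the controller-manager has created the `default` ServiceAccount, then records the elapsed time (13.002835585s here) as the elevateKubeSystemPrivileges wait. The following is a minimal sketch of that style of poll, an illustration only and not minikube's bootstrapper code; the binary and kubeconfig paths are taken from the log lines above, the 500ms interval is an assumption.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until the
// command succeeds (the ServiceAccount exists) or the timeout elapses.
func waitForDefaultServiceAccount(kubectlPath, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectlPath, "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // default ServiceAccount is present
		}
		time.Sleep(500 * time.Millisecond) // assumed poll interval
	}
	return fmt.Errorf("default ServiceAccount not created within %s", timeout)
}

func main() {
	err := waitForDefaultServiceAccount(
		"/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig",
		2*time.Minute,
	)
	fmt.Println("wait result:", err)
}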
	I0130 22:19:31.287988  681007 kubeadm.go:406] StartCluster complete in 5m12.874624935s
	I0130 22:19:31.288014  681007 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.288132  681007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:31.290435  681007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.290772  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:31.290924  681007 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:31.291004  681007 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291027  681007 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291024  681007 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850803"
	W0130 22:19:31.291035  681007 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:31.291044  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:31.291048  681007 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291053  681007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850803"
	I0130 22:19:31.291078  681007 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291084  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	W0130 22:19:31.291089  681007 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:31.291142  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.291497  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291528  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291577  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291578  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.308624  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0130 22:19:31.308641  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0130 22:19:31.308628  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0130 22:19:31.309140  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309143  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309231  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309662  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309683  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309807  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309825  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309829  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309837  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.310304  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310324  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310621  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.310944  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.310983  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.311193  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.311237  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.314600  681007 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-850803"
	W0130 22:19:31.314619  681007 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:31.314641  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.314888  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.314923  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.331266  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0130 22:19:31.331358  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0130 22:19:31.332259  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332277  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332769  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332791  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.332930  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332949  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.333243  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333307  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333459  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.333534  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.335458  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.337520  681007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:31.335819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.338601  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0130 22:19:31.338925  681007 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.338944  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:31.338969  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.340850  681007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:31.339883  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.341794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.342314  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.342344  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:31.342364  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:31.342381  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.342456  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.342572  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.342787  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.342807  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.342806  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.343515  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.344047  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.344096  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.345163  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346044  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.346073  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346341  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.346515  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.346617  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.346703  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.360658  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0130 22:19:31.361009  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.361631  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.361653  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.362059  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.362284  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.363819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.364079  681007 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.364091  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:31.364104  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.367056  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367482  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.367508  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367705  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.367877  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.368024  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.368159  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.486668  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:31.512324  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.548212  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:31.548241  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:31.565423  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.607291  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:31.607318  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:31.647162  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.647192  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:31.723006  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.913300  681007 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850803" context rescaled to 1 replicas
	I0130 22:19:31.913355  681007 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:31.915323  681007 out.go:177] * Verifying Kubernetes components...
	I0130 22:19:31.916700  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:33.003770  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.517052198s)
	I0130 22:19:33.003803  681007 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0130 22:19:33.533121  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020753837s)
	I0130 22:19:33.533193  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533208  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533167  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967690921s)
	I0130 22:19:33.533306  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533322  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533714  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533727  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533728  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533738  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533747  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533745  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533759  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533769  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533802  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533973  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533987  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.535503  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.535515  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.535531  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.628879  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.628911  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.629222  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.629249  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.629251  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.742264  681007 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.825530161s)
	I0130 22:19:33.742301  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.019251933s)
	I0130 22:19:33.742328  681007 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.742355  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742371  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.742681  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.742701  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.742712  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742736  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.743035  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.743058  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.743072  681007 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:33.745046  681007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:33.746494  681007 addons.go:505] enable addons completed in 2.455579767s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:33.792738  681007 node_ready.go:49] node "default-k8s-diff-port-850803" has status "Ready":"True"
	I0130 22:19:33.792765  681007 node_ready.go:38] duration metric: took 50.422631ms waiting for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.792774  681007 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:33.814090  681007 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:32.853930  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.854970  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.821685  681007 pod_ready.go:92] pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.821713  681007 pod_ready.go:81] duration metric: took 1.007586687s waiting for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.821725  681007 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827824  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.827846  681007 pod_ready.go:81] duration metric: took 6.114329ms waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827855  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835557  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.835577  681007 pod_ready.go:81] duration metric: took 7.716283ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835586  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846707  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.846730  681007 pod_ready.go:81] duration metric: took 11.137144ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846742  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855583  681007 pod_ready.go:92] pod "kube-proxy-9b97q" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:35.855607  681007 pod_ready.go:81] duration metric: took 1.00885903s waiting for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855616  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146642  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:36.146669  681007 pod_ready.go:81] duration metric: took 291.044646ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
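Each pod_ready.go line above is one iteration of a poll on the pod's Ready condition: status "False" keeps the wait going (pod_ready.go:102), while "True" ends it and records the duration (pod_ready.go:81/92). Below is a minimal client-go sketch of that check; it assumes a recent k8s.io/client-go and is an illustration of the pattern, not minikube's actual pod_ready implementation. The kubeconfig path and pod name are taken from the log lines above.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls the pod's Ready condition until it is True or the
// timeout expires, mirroring the repeated "Ready":"False" checks above.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling on transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	err = waitPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-z27l8", 6*time.Minute)
	fmt.Println("ready wait:", err)
}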
	I0130 22:19:36.146679  681007 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:38.154183  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:37.354609  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:39.854928  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:40.154641  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:42.159531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:41.855320  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.354523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.654954  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:47.154579  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:46.355021  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:48.853459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:49.653829  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:51.655608  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:50.853891  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:52.854695  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:55.354018  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:54.154453  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:56.155065  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:58.657247  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:57.853975  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:00.354902  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:01.153907  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:03.654237  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:02.854731  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:05.356880  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:06.155143  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:08.155296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:07.856132  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.356464  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.155799  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.654333  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.853942  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.354885  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.154056  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.154535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.853402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:20.353980  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:19.655422  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.154392  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.354117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.355044  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.155171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.655471  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.854532  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.354204  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.154677  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.654466  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.356403  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:33.356906  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:34.154078  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:36.654298  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:35.853262  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:37.857523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:40.354097  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:39.154049  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:41.654457  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:43.654895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:42.355195  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:44.854639  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:45.655775  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:48.155289  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:47.357754  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:49.855799  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:50.155498  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.655409  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.353449  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:54.354453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:55.155034  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:57.654844  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:56.354612  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:58.854992  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:59.655694  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.656577  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.353141  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:03.353830  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:04.154299  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:06.654312  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.654807  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:05.854650  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.353951  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.354031  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.655061  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.655432  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.354994  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:14.855265  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:15.159097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:17.653783  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:16.857702  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.359396  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.655858  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:22.156091  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:21.854394  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.354360  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.655296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:27.158080  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:26.855014  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.356117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.653580  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:32.154606  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:31.854704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.355484  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.654068  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.654158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.654269  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.357452  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.855223  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:40.655689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.154796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:41.354371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.854228  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:45.155130  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:47.155889  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:46.355266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:48.355485  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:50.362578  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:49.653701  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:51.655019  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:52.854642  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:55.353605  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:54.154411  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:56.654614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:58.660728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:57.854182  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:00.354287  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:01.155135  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:03.654733  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:02.853711  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:04.854845  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:05.656121  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:08.154541  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:07.353888  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:09.354542  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:10.653671  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:12.657917  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:11.854575  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:14.354327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:15.157012  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:17.158822  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:16.354558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:18.355214  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:19.655591  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.154262  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:20.855145  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.855595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:25.354646  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:24.654590  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:26.655050  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:27.357453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.854619  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.154225  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.156000  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:33.654263  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.855106  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:34.354611  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:35.654550  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:37.654631  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:36.856135  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.354424  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.655008  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.657897  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.659483  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.354687  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.354978  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:46.154172  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:48.154643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:45.853374  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:47.854345  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.353899  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.655054  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.655335  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.354795  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.853217  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.655525  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:57.153994  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:56.856987  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.353446  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.157129  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.655835  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.657302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.355499  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.356368  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:06.154373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:08.654373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854404  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854432  680821 pod_ready.go:81] duration metric: took 4m0.008096056s waiting for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:05.854442  680821 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:05.854449  680821 pod_ready.go:38] duration metric: took 4m1.997150293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:23:05.854467  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:05.854502  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:05.854561  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:05.929032  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:05.929061  680821 cri.go:89] found id: ""
	I0130 22:23:05.929073  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:05.929137  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.934693  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:05.934777  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:05.982312  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:05.982342  680821 cri.go:89] found id: ""
	I0130 22:23:05.982352  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:05.982417  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.986932  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:05.986988  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:06.031983  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.032007  680821 cri.go:89] found id: ""
	I0130 22:23:06.032015  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:06.032073  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.036373  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:06.036429  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:06.084796  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.084829  680821 cri.go:89] found id: ""
	I0130 22:23:06.084840  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:06.084908  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.089120  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:06.089185  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:06.139977  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.139998  680821 cri.go:89] found id: ""
	I0130 22:23:06.140006  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:06.140063  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.144088  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:06.144147  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:06.185075  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.185103  680821 cri.go:89] found id: ""
	I0130 22:23:06.185113  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:06.185164  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.189014  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:06.189070  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:06.223430  680821 cri.go:89] found id: ""
	I0130 22:23:06.223459  680821 logs.go:284] 0 containers: []
	W0130 22:23:06.223469  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:06.223477  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:06.223529  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:06.260048  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.260071  680821 cri.go:89] found id: ""
	I0130 22:23:06.260083  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:06.260141  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.263987  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:06.264013  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:06.315899  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:06.315930  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:06.366903  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:06.366935  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.406395  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:06.406429  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.445937  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:06.445967  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:06.507335  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:06.507368  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.559276  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:06.559313  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.618349  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:06.618390  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.660376  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:06.660410  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:07.080461  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:07.080504  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:07.153607  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.153767  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.176441  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:07.176475  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:07.191016  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:07.191045  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:07.338888  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.338919  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:07.339094  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:07.339109  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.339121  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.339129  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.339142  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
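The two "Found kubelet problem" entries in the block above come from scanning the kubelet journal for suspicious records and echoing them back in the summary. As a rough illustration only (the regular expression and the sample lines below are assumptions for this sketch, not minikube's actual heuristics), a standalone Go program doing that kind of triage could look like this:

// kubelet_problem_scan.go - a minimal sketch of scanning journalctl output for
// kubelet and flagging suspicious lines, in the spirit of the "Found kubelet
// problem" entries above. The pattern and sample lines are illustrative
// assumptions, not minikube's real detection logic.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// problemPattern is an assumed heuristic: error-level kubelet records, or
// warning-level reflector list failures like the ones quoted in the trace.
var problemPattern = regexp.MustCompile(`kubelet\[\d+\]: (E\d{4}|W\d{4} .*failed to list)`)

// findProblems returns every journal line that matches the problem pattern.
func findProblems(journal string) []string {
	var problems []string
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		line := sc.Text()
		if problemPattern.MatchString(line) {
			problems = append(problems, line)
		}
	}
	return problems
}

func main() {
	// Sample lines shaped like the journal entries quoted in the trace above.
	sample := `Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: I0130 22:19:01.000000 3851 server.go:1] started
Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421 3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: forbidden
Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490 3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: forbidden`
	for _, p := range findProblems(sample) {
		fmt.Println("Found kubelet problem:", p)
	}
}

Only the warning and error reflector lines are flagged; informational records pass through silently, which is why the summary block repeats just those two entries.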
	I0130 22:23:10.656229  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:13.154689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:15.156258  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.654584  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.340518  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:17.358757  680821 api_server.go:72] duration metric: took 4m15.748181205s to wait for apiserver process to appear ...
	I0130 22:23:17.358785  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:17.358824  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:17.358882  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:17.402796  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:17.402819  680821 cri.go:89] found id: ""
	I0130 22:23:17.402827  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:17.402878  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.408452  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:17.408525  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:17.454148  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.454174  680821 cri.go:89] found id: ""
	I0130 22:23:17.454185  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:17.454260  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.458375  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:17.458450  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:17.508924  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:17.508953  680821 cri.go:89] found id: ""
	I0130 22:23:17.508960  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:17.509011  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.512833  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:17.512900  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:17.556821  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:17.556849  680821 cri.go:89] found id: ""
	I0130 22:23:17.556857  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:17.556913  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.561605  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:17.561666  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:17.604962  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.604991  680821 cri.go:89] found id: ""
	I0130 22:23:17.605001  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:17.605078  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.611321  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:17.611395  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:17.651827  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:17.651860  680821 cri.go:89] found id: ""
	I0130 22:23:17.651869  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:17.651918  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.656414  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:17.656472  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:17.696085  680821 cri.go:89] found id: ""
	I0130 22:23:17.696120  680821 logs.go:284] 0 containers: []
	W0130 22:23:17.696130  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:17.696139  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:17.696197  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:17.742145  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.742171  680821 cri.go:89] found id: ""
	I0130 22:23:17.742183  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:17.742248  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.746837  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:17.746861  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:17.864654  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:17.864691  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.917753  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:17.917785  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.958876  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:17.958914  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.997774  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:17.997811  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:18.047778  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:18.047823  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:18.111572  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:18.111621  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:18.489601  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:18.489683  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:18.549905  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:18.549953  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:18.631865  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.632060  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.656777  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:18.656813  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:18.670944  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:18.670973  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:18.726388  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:18.726424  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:18.766317  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766350  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:18.766427  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:18.766446  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.766460  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.766473  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766485  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:20.155531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:22.654846  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:25.153520  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:27.158571  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:28.767516  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:23:28.774562  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:23:28.775796  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:28.775824  680821 api_server.go:131] duration metric: took 11.417031075s to wait for apiserver health ...
	I0130 22:23:28.775834  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:28.775860  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:28.775909  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:28.821439  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:28.821462  680821 cri.go:89] found id: ""
	I0130 22:23:28.821490  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:28.821556  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.826438  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:28.826495  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:28.870075  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:28.870104  680821 cri.go:89] found id: ""
	I0130 22:23:28.870113  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:28.870169  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.874672  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:28.874741  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:28.917733  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:28.917761  680821 cri.go:89] found id: ""
	I0130 22:23:28.917775  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:28.917835  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.925522  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:28.925586  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:28.979761  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:28.979793  680821 cri.go:89] found id: ""
	I0130 22:23:28.979803  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:28.979866  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.983990  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:28.984044  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:29.022516  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.022543  680821 cri.go:89] found id: ""
	I0130 22:23:29.022553  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:29.022604  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.026989  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:29.027069  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:29.065167  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.065194  680821 cri.go:89] found id: ""
	I0130 22:23:29.065205  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:29.065268  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.069436  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:29.069512  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:29.109503  680821 cri.go:89] found id: ""
	I0130 22:23:29.109532  680821 logs.go:284] 0 containers: []
	W0130 22:23:29.109539  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:29.109546  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:29.109599  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:29.158319  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:29.158343  680821 cri.go:89] found id: ""
	I0130 22:23:29.158350  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:29.158437  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.163004  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:29.163025  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:29.540158  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:29.540203  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:29.616783  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:29.616947  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:29.638172  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:29.638207  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:29.761562  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:29.761613  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:29.803930  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:29.803976  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:29.866722  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:29.866763  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.912093  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:29.912125  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.970591  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:29.970624  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:29.984722  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:29.984748  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:30.040548  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:30.040589  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:30.089982  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:30.090027  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:30.128235  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:30.128267  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:30.169872  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.169906  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:30.169982  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:30.169997  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:30.170008  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:30.170026  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.170035  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:29.653518  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:32.155147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:34.653672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:36.155187  681007 pod_ready.go:81] duration metric: took 4m0.008494222s waiting for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:36.155214  681007 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:36.155224  681007 pod_ready.go:38] duration metric: took 4m2.362439314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:23:36.155243  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:36.155283  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:36.155343  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:36.205838  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:36.205866  681007 cri.go:89] found id: ""
	I0130 22:23:36.205875  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:36.205945  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.210477  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:36.210558  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:36.253110  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:36.253139  681007 cri.go:89] found id: ""
	I0130 22:23:36.253148  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:36.253204  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.257054  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:36.257124  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:36.296932  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.296959  681007 cri.go:89] found id: ""
	I0130 22:23:36.296971  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:36.297034  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.301030  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:36.301080  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:36.339966  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:36.339992  681007 cri.go:89] found id: ""
	I0130 22:23:36.340002  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:36.340062  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.345411  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:36.345474  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:36.389010  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.389031  681007 cri.go:89] found id: ""
	I0130 22:23:36.389039  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:36.389091  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.392885  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:36.392969  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:36.430208  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:36.430228  681007 cri.go:89] found id: ""
	I0130 22:23:36.430237  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:36.430282  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.434507  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:36.434562  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:36.483517  681007 cri.go:89] found id: ""
	I0130 22:23:36.483542  681007 logs.go:284] 0 containers: []
	W0130 22:23:36.483549  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:36.483555  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:36.483613  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:36.543345  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:36.543370  681007 cri.go:89] found id: ""
	I0130 22:23:36.543380  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:36.543445  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.548033  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:36.548064  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:36.630123  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630304  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630456  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630629  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:36.651951  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:36.651990  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:36.667227  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:36.667261  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:36.815056  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:36.815097  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.856960  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:36.856992  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.903856  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:36.903909  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:37.318919  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:37.318964  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:37.368999  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:37.369037  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:37.412698  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:37.412727  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:37.459356  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:37.459389  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:37.509418  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:37.509454  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:37.551349  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:37.551392  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:37.597863  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597892  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:37.597945  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:37.597958  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597964  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597976  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597982  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:37.597988  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597998  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:40.180631  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:23:40.180660  680821 system_pods.go:61] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.180665  680821 system_pods.go:61] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.180669  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.180674  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.180678  680821 system_pods.go:61] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.180683  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.180693  680821 system_pods.go:61] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.180701  680821 system_pods.go:61] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.180710  680821 system_pods.go:74] duration metric: took 11.404869748s to wait for pod list to return data ...
	I0130 22:23:40.180749  680821 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:23:40.184327  680821 default_sa.go:45] found service account: "default"
	I0130 22:23:40.184349  680821 default_sa.go:55] duration metric: took 3.590968ms for default service account to be created ...
	I0130 22:23:40.184356  680821 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:23:40.194745  680821 system_pods.go:86] 8 kube-system pods found
	I0130 22:23:40.194769  680821 system_pods.go:89] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.194774  680821 system_pods.go:89] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.194779  680821 system_pods.go:89] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.194784  680821 system_pods.go:89] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.194788  680821 system_pods.go:89] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.194791  680821 system_pods.go:89] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.194800  680821 system_pods.go:89] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.194805  680821 system_pods.go:89] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.194812  680821 system_pods.go:126] duration metric: took 10.451241ms to wait for k8s-apps to be running ...
	I0130 22:23:40.194817  680821 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:23:40.194866  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:23:40.214067  680821 system_svc.go:56] duration metric: took 19.241185ms WaitForService to wait for kubelet.
	I0130 22:23:40.214091  680821 kubeadm.go:581] duration metric: took 4m38.603520566s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:23:40.214134  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:23:40.217725  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:23:40.217791  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:23:40.217812  680821 node_conditions.go:105] duration metric: took 3.672364ms to run NodePressure ...
	I0130 22:23:40.217827  680821 start.go:228] waiting for startup goroutines ...
	I0130 22:23:40.217840  680821 start.go:233] waiting for cluster config update ...
	I0130 22:23:40.217857  680821 start.go:242] writing updated cluster config ...
	I0130 22:23:40.218114  680821 ssh_runner.go:195] Run: rm -f paused
	I0130 22:23:40.275722  680821 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:23:40.278571  680821 out.go:177] * Done! kubectl is now configured to use "embed-certs-713938" cluster and "default" namespace by default
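The "Checking apiserver healthz at https://...:8443/healthz ... returned 200: ok" steps in this trace boil down to polling the apiserver's /healthz endpoint over HTTPS until it answers 200. A minimal standalone sketch of such a probe, assuming a self-signed apiserver certificate and using only the Go standard library (this is not minikube's implementation; the endpoint URL is copied from the trace purely for illustration), might be:

// healthz_probe.go - a minimal sketch of an apiserver health probe of the kind
// the trace records. Not minikube's code; the URL and the decision to skip TLS
// verification are illustrative assumptions.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues one GET against the healthz endpoint and reports
// whether it answered 200.
func probeHealthz(url string) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver in this sketch serves a self-signed certificate, so
		// verification is skipped; a real client would trust the cluster CA.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // e.g. "200: ok"
	return nil
}

func main() {
	// Endpoint taken from the trace above, used here as a placeholder.
	if err := probeHealthz("https://192.168.72.213:8443/healthz"); err != nil {
		fmt.Println("apiserver not healthy yet:", err)
	}
}

Repeating this probe on an interval until it succeeds is what turns a single check into the multi-second "duration metric" the trace reports for apiserver health.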
	I0130 22:23:47.599324  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:47.615605  681007 api_server.go:72] duration metric: took 4m15.702208866s to wait for apiserver process to appear ...
	I0130 22:23:47.615630  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:47.615671  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:47.615722  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:47.660944  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:47.660980  681007 cri.go:89] found id: ""
	I0130 22:23:47.660997  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:47.661051  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.666115  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:47.666180  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:47.709726  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:47.709750  681007 cri.go:89] found id: ""
	I0130 22:23:47.709760  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:47.709821  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.714636  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:47.714691  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:47.760216  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:47.760245  681007 cri.go:89] found id: ""
	I0130 22:23:47.760262  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:47.760323  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.765395  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:47.765450  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:47.815572  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:47.815604  681007 cri.go:89] found id: ""
	I0130 22:23:47.815614  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:47.815674  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.819670  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:47.819729  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:47.858767  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:47.858795  681007 cri.go:89] found id: ""
	I0130 22:23:47.858805  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:47.858865  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.863151  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:47.863276  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:47.911294  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:47.911319  681007 cri.go:89] found id: ""
	I0130 22:23:47.911327  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:47.911387  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.915772  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:47.915852  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:47.952096  681007 cri.go:89] found id: ""
	I0130 22:23:47.952125  681007 logs.go:284] 0 containers: []
	W0130 22:23:47.952136  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:47.952144  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:47.952229  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:47.990137  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:47.990162  681007 cri.go:89] found id: ""
	I0130 22:23:47.990170  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:47.990228  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.994880  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:47.994902  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:48.068521  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068700  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068849  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.069010  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.091781  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:48.091820  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:48.213688  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:48.213724  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:48.264200  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:48.264234  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:48.319751  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:48.319785  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:48.357815  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:48.357846  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:48.406822  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:48.406858  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:48.419822  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:48.419852  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:48.471685  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:48.471719  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:48.508040  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:48.508088  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:48.559268  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:48.559302  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:48.609976  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:48.610007  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:48.966774  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966810  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:48.966900  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:48.966912  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966919  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966927  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966934  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.966939  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966945  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:58.967938  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:23:58.973850  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:23:58.975689  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:58.975713  681007 api_server.go:131] duration metric: took 11.360076324s to wait for apiserver health ...
	I0130 22:23:58.975720  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:58.975745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:58.975793  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:59.023436  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:59.023458  681007 cri.go:89] found id: ""
	I0130 22:23:59.023466  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:59.023514  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.027855  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:59.027916  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:59.067167  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:59.067194  681007 cri.go:89] found id: ""
	I0130 22:23:59.067204  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:59.067266  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.076124  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:59.076191  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:59.115918  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:59.115947  681007 cri.go:89] found id: ""
	I0130 22:23:59.115956  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:59.116011  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.120440  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:59.120489  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:59.165157  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.165185  681007 cri.go:89] found id: ""
	I0130 22:23:59.165194  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:59.165254  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.169774  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:59.169845  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:59.230609  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:59.230640  681007 cri.go:89] found id: ""
	I0130 22:23:59.230650  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:59.230713  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.235563  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:59.235653  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:59.279835  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.279866  681007 cri.go:89] found id: ""
	I0130 22:23:59.279886  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:59.279954  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.284745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:59.284809  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:59.331328  681007 cri.go:89] found id: ""
	I0130 22:23:59.331361  681007 logs.go:284] 0 containers: []
	W0130 22:23:59.331374  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:59.331380  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:59.331432  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:59.370468  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.370493  681007 cri.go:89] found id: ""
	I0130 22:23:59.370501  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:59.370553  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.375047  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:59.375075  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.428263  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:59.428297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.495321  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:59.495356  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.537553  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:59.537590  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:59.915651  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:59.915691  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:59.930178  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:59.930209  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:24:00.070621  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:24:00.070656  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:24:00.111617  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:24:00.111655  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:24:00.156067  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:24:00.156104  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:24:00.206264  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:24:00.206292  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:24:00.282212  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282436  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282642  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282805  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.304194  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:24:00.304223  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:24:00.355473  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:24:00.355508  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:24:00.402962  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403001  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:24:00.403077  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:24:00.403092  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403101  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403114  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403124  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.403136  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403144  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:24:10.411200  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:24:10.411225  681007 system_pods.go:61] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.411231  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.411235  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.411239  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.411242  681007 system_pods.go:61] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.411246  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.411252  681007 system_pods.go:61] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.411258  681007 system_pods.go:61] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.411264  681007 system_pods.go:74] duration metric: took 11.435539762s to wait for pod list to return data ...
	I0130 22:24:10.411274  681007 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:24:10.413887  681007 default_sa.go:45] found service account: "default"
	I0130 22:24:10.413915  681007 default_sa.go:55] duration metric: took 2.635544ms for default service account to be created ...
	I0130 22:24:10.413923  681007 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:24:10.420235  681007 system_pods.go:86] 8 kube-system pods found
	I0130 22:24:10.420256  681007 system_pods.go:89] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.420263  681007 system_pods.go:89] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.420271  681007 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.420281  681007 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.420290  681007 system_pods.go:89] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.420301  681007 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.420311  681007 system_pods.go:89] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.420319  681007 system_pods.go:89] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.420327  681007 system_pods.go:126] duration metric: took 6.398195ms to wait for k8s-apps to be running ...
	I0130 22:24:10.420335  681007 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:24:10.420386  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:24:10.438372  681007 system_svc.go:56] duration metric: took 18.027152ms WaitForService to wait for kubelet.
	I0130 22:24:10.438396  681007 kubeadm.go:581] duration metric: took 4m38.525004918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:24:10.438424  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:24:10.441514  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:24:10.441561  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:24:10.441572  681007 node_conditions.go:105] duration metric: took 3.14294ms to run NodePressure ...
	I0130 22:24:10.441583  681007 start.go:228] waiting for startup goroutines ...
	I0130 22:24:10.441591  681007 start.go:233] waiting for cluster config update ...
	I0130 22:24:10.441601  681007 start.go:242] writing updated cluster config ...
	I0130 22:24:10.441855  681007 ssh_runner.go:195] Run: rm -f paused
	I0130 22:24:10.493274  681007 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:24:10.495414  681007 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850803" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:13:40 UTC, ends at Tue 2024-01-30 22:32:42 UTC. --
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.024905602Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653962024870365,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=add188cd-c66d-43bc-8129-16c585cf7e1a name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.025500037Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a4c374d-978d-484f-a46e-f706bb165866 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.025574521Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a4c374d-978d-484f-a46e-f706bb165866 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.025786303Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a4c374d-978d-484f-a46e-f706bb165866 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.071005220Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=6242b45b-ead2-47ce-9c85-0409827efee7 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.071188690Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=6242b45b-ead2-47ce-9c85-0409827efee7 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.072469703Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6290f9d9-9113-43e3-8261-14f192f77cf4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.073202283Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653962073072104,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6290f9d9-9113-43e3-8261-14f192f77cf4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.073865670Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=18d97b58-d897-4486-a13a-5c5d65448349 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.073914403Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=18d97b58-d897-4486-a13a-5c5d65448349 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.074164515Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=18d97b58-d897-4486-a13a-5c5d65448349 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.114674817Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=b528a6cd-6c10-4d1d-b13a-f2f05c7e5597 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.114731151Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=b528a6cd-6c10-4d1d-b13a-f2f05c7e5597 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.116537721Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=80e46e48-f16c-4142-ba13-63a489fb1171 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.116983722Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653962116968514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=80e46e48-f16c-4142-ba13-63a489fb1171 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.118288237Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a6487ecb-879b-492e-9528-c501eda0be93 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.118360240Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a6487ecb-879b-492e-9528-c501eda0be93 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.118511001Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a6487ecb-879b-492e-9528-c501eda0be93 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.155322825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=f3c5c3a5-9df2-487a-a532-4ece13d05e5e name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.155419914Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=f3c5c3a5-9df2-487a-a532-4ece13d05e5e name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.156752794Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6842c55b-aeb8-4a3b-b960-74b85d9fd175 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.157222312Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653962157208240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6842c55b-aeb8-4a3b-b960-74b85d9fd175 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.157967023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cb60f470-d461-49a3-afc5-d67374e93908 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.158049204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cb60f470-d461-49a3-afc5-d67374e93908 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:42 embed-certs-713938 crio[727]: time="2024-01-30 22:32:42.158264134Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cb60f470-d461-49a3-afc5-d67374e93908 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c736f58404008       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   7a63e92d8d981       storage-provisioner
	3a8cdd739a326       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   f9754515a75f0       coredns-5dd5756b68-l6hkm
	40781d148e717       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   cbdbcb8601a88       kube-proxy-f7mgv
	7824c0af9e71a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   7d3cd0f6c8749       etcd-embed-certs-713938
	30becb2331dfc       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   16d036c206b52       kube-scheduler-embed-certs-713938
	57a4b15732d48       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   67d7759b1a42e       kube-controller-manager-embed-certs-713938
	59033ddbd5513       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   6bd300c002049       kube-apiserver-embed-certs-713938
	
	
	==> coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56930 - 1726 "HINFO IN 1646608755236289111.2736373352341829840. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048660713s
	
	
	==> describe nodes <==
	Name:               embed-certs-713938
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-713938
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=embed-certs-713938
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_18_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:18:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-713938
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 22:32:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.213
	  Hostname:    embed-certs-713938
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb3b4d72af244a1cbed79c8534019bb6
	  System UUID:                bb3b4d72-af24-4a1c-bed7-9c8534019bb6
	  Boot ID:                    10a335bc-5ba6-4630-81ca-783257ec95f2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-l6hkm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-embed-certs-713938                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-embed-certs-713938             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-embed-certs-713938    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-f7mgv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-embed-certs-713938             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-vhxng               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 14m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)  kubelet          Node embed-certs-713938 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)  kubelet          Node embed-certs-713938 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)  kubelet          Node embed-certs-713938 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node embed-certs-713938 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node embed-certs-713938 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node embed-certs-713938 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node embed-certs-713938 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node embed-certs-713938 status is now: NodeReady
	  Normal  RegisteredNode           13m                node-controller  Node embed-certs-713938 event: Registered Node embed-certs-713938 in Controller
	
	
	==> dmesg <==
	[Jan30 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.391101] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.238302] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158925] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.514873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000023] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.350319] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.111059] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.148849] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.126783] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.225193] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[Jan30 22:14] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +18.866960] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 22:18] systemd-fstab-generator[3516]: Ignoring "noauto" for root device
	[  +9.778008] systemd-fstab-generator[3844]: Ignoring "noauto" for root device
	[Jan30 22:19] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] <==
	{"level":"info","ts":"2024-01-30T22:18:42.922502Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"dcbc1fe92b491f0f","local-member-id":"abef9893912f41ab","added-peer-id":"abef9893912f41ab","added-peer-peer-urls":["https://192.168.72.213:2380"]}
	{"level":"info","ts":"2024-01-30T22:18:42.922625Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.213:2380"}
	{"level":"info","ts":"2024-01-30T22:18:42.922651Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.213:2380"}
	{"level":"info","ts":"2024-01-30T22:18:42.921802Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T22:18:43.598492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:43.598591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:43.598626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab received MsgPreVoteResp from abef9893912f41ab at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:43.59867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab became candidate at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.598694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab received MsgVoteResp from abef9893912f41ab at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.598721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab became leader at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.598746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: abef9893912f41ab elected leader abef9893912f41ab at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.600333Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"abef9893912f41ab","local-member-attributes":"{Name:embed-certs-713938 ClientURLs:[https://192.168.72.213:2379]}","request-path":"/0/members/abef9893912f41ab/attributes","cluster-id":"dcbc1fe92b491f0f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T22:18:43.600535Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:43.6007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:43.601827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T22:18:43.602015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:43.602054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:43.60217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.213:2379"}
	{"level":"info","ts":"2024-01-30T22:18:43.602274Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:43.606841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dcbc1fe92b491f0f","local-member-id":"abef9893912f41ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:43.606988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:43.607038Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:28:43.87845Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-01-30T22:28:43.880961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.099617ms","hash":4222840481}
	{"level":"info","ts":"2024-01-30T22:28:43.881041Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4222840481,"revision":714,"compact-revision":-1}
	
	
	==> kernel <==
	 22:32:42 up 19 min,  0 users,  load average: 0.08, 0.16, 0.15
	Linux embed-certs-713938 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] <==
	I0130 22:28:45.495483       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:28:46.495938       1 handler_proxy.go:93] no RequestInfo found in the context
	W0130 22:28:46.496206       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:28:46.496255       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:28:46.496275       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0130 22:28:46.496321       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:28:46.497418       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:29:45.386457       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:29:46.496918       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:29:46.496992       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:29:46.497005       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:29:46.498411       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:29:46.498504       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:29:46.498514       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:30:45.386301       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 22:31:45.386576       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:31:46.497567       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:31:46.497626       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:31:46.497644       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:31:46.498856       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:31:46.498961       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:31:46.498969       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] <==
	I0130 22:27:01.026992       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:27:30.539514       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:27:31.036650       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:00.548642       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:01.045265       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:30.555359       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:31.055991       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:00.561955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:01.065801       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:30.567877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:31.075142       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:00.574493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:01.083920       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:30:12.970854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.388µs"
	I0130 22:30:23.969944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.117µs"
	E0130 22:30:30.581650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:31.092379       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:00.588566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:01.103321       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:30.595418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:31.116869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:00.602505       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:01.129712       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:30.609223       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:31.138929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] <==
	I0130 22:19:04.106893       1 server_others.go:69] "Using iptables proxy"
	I0130 22:19:04.237886       1 node.go:141] Successfully retrieved node IP: 192.168.72.213
	I0130 22:19:04.633360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 22:19:04.633537       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 22:19:04.672725       1 server_others.go:152] "Using iptables Proxier"
	I0130 22:19:04.724058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 22:19:04.725906       1 server.go:846] "Version info" version="v1.28.4"
	I0130 22:19:04.730214       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 22:19:04.733887       1 config.go:188] "Starting service config controller"
	I0130 22:19:04.734245       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 22:19:04.734350       1 config.go:97] "Starting endpoint slice config controller"
	I0130 22:19:04.734403       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 22:19:04.739695       1 config.go:315] "Starting node config controller"
	I0130 22:19:04.739735       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 22:19:04.839819       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 22:19:04.839900       1 shared_informer.go:318] Caches are synced for service config
	I0130 22:19:04.839921       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] <==
	W0130 22:18:45.531452       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:18:45.531462       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:18:46.378499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:18:46.378570       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 22:18:46.444574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 22:18:46.444740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0130 22:18:46.522417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:18:46.522467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 22:18:46.524493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:46.524512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:46.547956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 22:18:46.548038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 22:18:46.605644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:18:46.605745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 22:18:46.684988       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:46.685052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:46.693398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 22:18:46.693493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 22:18:46.695828       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:18:46.695885       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:18:46.738002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:18:46.738054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 22:18:46.785909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 22:18:46.785960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0130 22:18:48.713197       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:13:40 UTC, ends at Tue 2024-01-30 22:32:42 UTC. --
	Jan 30 22:29:48 embed-certs-713938 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:29:58 embed-certs-713938 kubelet[3851]: E0130 22:29:58.971216    3851 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 22:29:58 embed-certs-713938 kubelet[3851]: E0130 22:29:58.971281    3851 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 22:29:58 embed-certs-713938 kubelet[3851]: E0130 22:29:58.971596    3851 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wdr8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pr
obeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:F
ile,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-vhxng_kube-system(87663986-4226-44fc-9eea-43dd94a12fae): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:29:58 embed-certs-713938 kubelet[3851]: E0130 22:29:58.971654    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:30:12 embed-certs-713938 kubelet[3851]: E0130 22:30:12.954194    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:30:23 embed-certs-713938 kubelet[3851]: E0130 22:30:23.953576    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:30:35 embed-certs-713938 kubelet[3851]: E0130 22:30:35.953297    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:30:48 embed-certs-713938 kubelet[3851]: E0130 22:30:48.981997    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:30:48 embed-certs-713938 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:30:48 embed-certs-713938 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:30:48 embed-certs-713938 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:30:50 embed-certs-713938 kubelet[3851]: E0130 22:30:50.957909    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:04 embed-certs-713938 kubelet[3851]: E0130 22:31:04.954402    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:19 embed-certs-713938 kubelet[3851]: E0130 22:31:19.953746    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:33 embed-certs-713938 kubelet[3851]: E0130 22:31:33.953234    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:48 embed-certs-713938 kubelet[3851]: E0130 22:31:48.953957    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]: E0130 22:31:49.087349    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:32:02 embed-certs-713938 kubelet[3851]: E0130 22:32:02.954235    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:16 embed-certs-713938 kubelet[3851]: E0130 22:32:16.953304    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:27 embed-certs-713938 kubelet[3851]: E0130 22:32:27.953743    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:38 embed-certs-713938 kubelet[3851]: E0130 22:32:38.955309    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	
	
	==> storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] <==
	I0130 22:19:05.354010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:19:05.369946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:19:05.370692       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:19:05.386065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:19:05.387534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-713938_28bf9998-97c9-42a3-8688-1380b0cd3222!
	I0130 22:19:05.386790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99103890-2a9b-434b-b83a-f09cc284a485", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-713938_28bf9998-97c9-42a3-8688-1380b0cd3222 became leader
	I0130 22:19:05.488521       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-713938_28bf9998-97c9-42a3-8688-1380b0cd3222!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713938 -n embed-certs-713938
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-713938 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vhxng
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-713938 describe pod metrics-server-57f55c9bc5-vhxng
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-713938 describe pod metrics-server-57f55c9bc5-vhxng: exit status 1 (84.943006ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vhxng" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-713938 describe pod metrics-server-57f55c9bc5-vhxng: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.36s)
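Both UserAppExistsAfterStop failures (embed-certs above and default-k8s-diff-port below) are the harness timing out while polling for dashboard pods; the next section shows the underlying wait: up to 9m0s for pods matching "k8s-app=kubernetes-dashboard" in the "kubernetes-dashboard" namespace. The sketch below illustrates that kind of wait with client-go; it is an illustration only, not minikube's own helper, and the file name, kubeconfig handling, and 5s polling interval are assumptions.

// waitfordash.go — hypothetical sketch of the wait performed by
// start_stop_delete_test.go:274: poll until a pod labelled
// k8s-app=kubernetes-dashboard is Running, or the 9m0s deadline expires.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the profile's context is reachable via the default ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// 9m0s is the timeout reported by the failing tests.
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	err = wait.PollUntilContextCancel(ctx, 5*time.Second, true, func(ctx context.Context) (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			return false, nil // treat list errors as transient and keep polling
		}
		for _, p := range pods.Items {
			if p.Status.Phase == corev1.PodRunning {
				return true, nil
			}
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("dashboard pod never became Running:", err)
		return
	}
	fmt.Println("dashboard pod is Running")
}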

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.33s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:33:11.112823943 +0000 UTC m=+5562.513766076
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-850803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-850803 logs -n 25: (1.289681886s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-742001                              | stopped-upgrade-742001       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-822826                              | cert-expiration-822826       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC | 30 Jan 24 22:32 UTC |
	| start   | -p newest-cni-507807 --memory=2200 --alsologtostderr   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
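
	The command table above is the ground truth for what was run. Purely as an illustration (and not part of the report's own harness), a sketch like the following could replay the final "start -p newest-cni-507807" invocation with Go's os/exec; the binary path and flags are copied from the table, while the surrounding program is an assumption.

	// replay_start.go: illustrative sketch only; replays the logged
	// "start -p newest-cni-507807" invocation from the command table above.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		// Binary path and flags are taken from the table; the harness
		// around them (os/exec, stdout wiring) is an assumption.
		cmd := exec.Command("out/minikube-linux-amd64",
			"start", "-p", "newest-cni-507807",
			"--memory=2200", "--alsologtostderr",
			"--wait=apiserver,system_pods,default_sa",
			"--feature-gates", "ServerSideApply=true",
			"--network-plugin=cni",
			"--extra-config=kubeadm.pod-network-cidr=10.42.0.0/16",
			"--driver=kvm2", "--container-runtime=crio",
			"--kubernetes-version=v1.29.0-rc.2",
		)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("minikube start failed: %v", err)
		}
	}
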
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:32:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:32:47.836778  686214 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:32:47.836996  686214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:32:47.837009  686214 out.go:309] Setting ErrFile to fd 2...
	I0130 22:32:47.837014  686214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:32:47.837255  686214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:32:47.837978  686214 out.go:303] Setting JSON to false
	I0130 22:32:47.839206  686214 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11720,"bootTime":1706642248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:32:47.839262  686214 start.go:138] virtualization: kvm guest
	I0130 22:32:47.842582  686214 out.go:177] * [newest-cni-507807] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:32:47.844294  686214 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:32:47.844279  686214 notify.go:220] Checking for updates...
	I0130 22:32:47.846098  686214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:32:47.847879  686214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:32:47.850175  686214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:32:47.851625  686214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:32:47.852933  686214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:32:47.854620  686214 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:32:47.854735  686214 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:32:47.854840  686214 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:32:47.854971  686214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:32:47.891638  686214 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 22:32:47.893377  686214 start.go:298] selected driver: kvm2
	I0130 22:32:47.893395  686214 start.go:902] validating driver "kvm2" against <nil>
	I0130 22:32:47.893405  686214 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:32:47.894214  686214 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:32:47.894304  686214 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:32:47.910530  686214 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:32:47.910568  686214 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0130 22:32:47.910587  686214 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0130 22:32:47.910780  686214 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0130 22:32:47.910856  686214 cni.go:84] Creating CNI manager for ""
	I0130 22:32:47.910875  686214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:32:47.910914  686214 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 22:32:47.910926  686214 start_flags.go:321] config:
	{Name:newest-cni-507807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-507807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:32:47.911139  686214 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:32:47.913265  686214 out.go:177] * Starting control plane node newest-cni-507807 in cluster newest-cni-507807
	I0130 22:32:47.914528  686214 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:32:47.914572  686214 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 22:32:47.914584  686214 cache.go:56] Caching tarball of preloaded images
	I0130 22:32:47.914668  686214 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:32:47.914684  686214 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0130 22:32:47.914799  686214 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/config.json ...
	I0130 22:32:47.914830  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/config.json: {Name:mk81570407f5d4996058025017b1e2b2861438ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:32:47.915036  686214 start.go:365] acquiring machines lock for newest-cni-507807: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:32:47.915083  686214 start.go:369] acquired machines lock for "newest-cni-507807" in 25.573µs
	I0130 22:32:47.915101  686214 start.go:93] Provisioning new machine with config: &{Name:newest-cni-507807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-507807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
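
	At this point the log shows the profile config being saved to .minikube/profiles/newest-cni-507807/config.json under a write lock. The snippet below is a minimal sketch of that kind of step, assuming a made-up ProfileConfig struct and a temp-dir path rather than minikube's real config schema; the file-lock coordination noted in the log is deliberately omitted.

	// save_profile.go: illustrative sketch of persisting a profile config,
	// loosely mirroring the "Saving config to .../config.json" step above.
	// The struct fields and output path are assumptions, not minikube's schema.
	package main

	import (
		"encoding/json"
		"log"
		"os"
		"path/filepath"
	)

	type ProfileConfig struct {
		Name              string `json:"Name"`
		Driver            string `json:"Driver"`
		Memory            int    `json:"Memory"`
		CPUs              int    `json:"CPUs"`
		KubernetesVersion string `json:"KubernetesVersion"`
		ContainerRuntime  string `json:"ContainerRuntime"`
	}

	func main() {
		cfg := ProfileConfig{
			Name:              "newest-cni-507807",
			Driver:            "kvm2",
			Memory:            2200,
			CPUs:              2,
			KubernetesVersion: "v1.29.0-rc.2",
			ContainerRuntime:  "crio",
		}

		dir := filepath.Join(os.TempDir(), "profiles", cfg.Name)
		if err := os.MkdirAll(dir, 0o755); err != nil {
			log.Fatal(err)
		}

		data, err := json.MarshalIndent(cfg, "", "  ")
		if err != nil {
			log.Fatal(err)
		}
		// Real minikube guards this write with a file lock (lock.go above);
		// that coordination is omitted here for brevity.
		if err := os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644); err != nil {
			log.Fatal(err)
		}
	}
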
	I0130 22:32:47.915208  686214 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 22:32:47.916952  686214 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0130 22:32:47.917110  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:32:47.917147  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:32:47.930978  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0130 22:32:47.931454  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:32:47.932064  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:32:47.932094  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:32:47.932411  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:32:47.932596  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetMachineName
	I0130 22:32:47.932769  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:32:47.932937  686214 start.go:159] libmachine.API.Create for "newest-cni-507807" (driver="kvm2")
	I0130 22:32:47.932968  686214 client.go:168] LocalClient.Create starting
	I0130 22:32:47.933032  686214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem
	I0130 22:32:47.933070  686214 main.go:141] libmachine: Decoding PEM data...
	I0130 22:32:47.933093  686214 main.go:141] libmachine: Parsing certificate...
	I0130 22:32:47.933160  686214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem
	I0130 22:32:47.933197  686214 main.go:141] libmachine: Decoding PEM data...
	I0130 22:32:47.933219  686214 main.go:141] libmachine: Parsing certificate...
	I0130 22:32:47.933245  686214 main.go:141] libmachine: Running pre-create checks...
	I0130 22:32:47.933260  686214 main.go:141] libmachine: (newest-cni-507807) Calling .PreCreateCheck
	I0130 22:32:47.933668  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetConfigRaw
	I0130 22:32:47.934088  686214 main.go:141] libmachine: Creating machine...
	I0130 22:32:47.934101  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Create
	I0130 22:32:47.934253  686214 main.go:141] libmachine: (newest-cni-507807) Creating KVM machine...
	I0130 22:32:47.935466  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found existing default KVM network
	I0130 22:32:47.937201  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:47.936990  686237 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f830}
	I0130 22:32:47.942373  686214 main.go:141] libmachine: (newest-cni-507807) DBG | trying to create private KVM network mk-newest-cni-507807 192.168.39.0/24...
	I0130 22:32:48.015573  686214 main.go:141] libmachine: (newest-cni-507807) DBG | private KVM network mk-newest-cni-507807 192.168.39.0/24 created
	I0130 22:32:48.015616  686214 main.go:141] libmachine: (newest-cni-507807) Setting up store path in /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807 ...
	I0130 22:32:48.015639  686214 main.go:141] libmachine: (newest-cni-507807) Building disk image from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 22:32:48.015701  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.015644  686237 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:32:48.015869  686214 main.go:141] libmachine: (newest-cni-507807) Downloading /home/jenkins/minikube-integration/18014-640473/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 22:32:48.266855  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.266710  686237 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa...
	I0130 22:32:48.333982  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.333861  686237 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/newest-cni-507807.rawdisk...
	I0130 22:32:48.334014  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Writing magic tar header
	I0130 22:32:48.334035  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Writing SSH key tar header
	I0130 22:32:48.334195  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.334078  686237 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807 ...
	I0130 22:32:48.334231  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807
	I0130 22:32:48.334275  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807 (perms=drwx------)
	I0130 22:32:48.334307  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines (perms=drwxr-xr-x)
	I0130 22:32:48.334321  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines
	I0130 22:32:48.334339  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube (perms=drwxr-xr-x)
	I0130 22:32:48.334354  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473 (perms=drwxrwxr-x)
	I0130 22:32:48.334365  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:32:48.334377  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473
	I0130 22:32:48.334387  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 22:32:48.334403  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 22:32:48.334420  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 22:32:48.334446  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins
	I0130 22:32:48.334458  686214 main.go:141] libmachine: (newest-cni-507807) Creating domain...
	I0130 22:32:48.334475  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home
	I0130 22:32:48.334488  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Skipping /home - not owner
	I0130 22:32:48.335816  686214 main.go:141] libmachine: (newest-cni-507807) define libvirt domain using xml: 
	I0130 22:32:48.335843  686214 main.go:141] libmachine: (newest-cni-507807) <domain type='kvm'>
	I0130 22:32:48.335854  686214 main.go:141] libmachine: (newest-cni-507807)   <name>newest-cni-507807</name>
	I0130 22:32:48.335885  686214 main.go:141] libmachine: (newest-cni-507807)   <memory unit='MiB'>2200</memory>
	I0130 22:32:48.335901  686214 main.go:141] libmachine: (newest-cni-507807)   <vcpu>2</vcpu>
	I0130 22:32:48.335909  686214 main.go:141] libmachine: (newest-cni-507807)   <features>
	I0130 22:32:48.335919  686214 main.go:141] libmachine: (newest-cni-507807)     <acpi/>
	I0130 22:32:48.335931  686214 main.go:141] libmachine: (newest-cni-507807)     <apic/>
	I0130 22:32:48.335940  686214 main.go:141] libmachine: (newest-cni-507807)     <pae/>
	I0130 22:32:48.335953  686214 main.go:141] libmachine: (newest-cni-507807)     
	I0130 22:32:48.335963  686214 main.go:141] libmachine: (newest-cni-507807)   </features>
	I0130 22:32:48.335974  686214 main.go:141] libmachine: (newest-cni-507807)   <cpu mode='host-passthrough'>
	I0130 22:32:48.335985  686214 main.go:141] libmachine: (newest-cni-507807)   
	I0130 22:32:48.335993  686214 main.go:141] libmachine: (newest-cni-507807)   </cpu>
	I0130 22:32:48.336004  686214 main.go:141] libmachine: (newest-cni-507807)   <os>
	I0130 22:32:48.336018  686214 main.go:141] libmachine: (newest-cni-507807)     <type>hvm</type>
	I0130 22:32:48.336032  686214 main.go:141] libmachine: (newest-cni-507807)     <boot dev='cdrom'/>
	I0130 22:32:48.336044  686214 main.go:141] libmachine: (newest-cni-507807)     <boot dev='hd'/>
	I0130 22:32:48.336057  686214 main.go:141] libmachine: (newest-cni-507807)     <bootmenu enable='no'/>
	I0130 22:32:48.336066  686214 main.go:141] libmachine: (newest-cni-507807)   </os>
	I0130 22:32:48.336074  686214 main.go:141] libmachine: (newest-cni-507807)   <devices>
	I0130 22:32:48.336096  686214 main.go:141] libmachine: (newest-cni-507807)     <disk type='file' device='cdrom'>
	I0130 22:32:48.336116  686214 main.go:141] libmachine: (newest-cni-507807)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/boot2docker.iso'/>
	I0130 22:32:48.336129  686214 main.go:141] libmachine: (newest-cni-507807)       <target dev='hdc' bus='scsi'/>
	I0130 22:32:48.336142  686214 main.go:141] libmachine: (newest-cni-507807)       <readonly/>
	I0130 22:32:48.336151  686214 main.go:141] libmachine: (newest-cni-507807)     </disk>
	I0130 22:32:48.336164  686214 main.go:141] libmachine: (newest-cni-507807)     <disk type='file' device='disk'>
	I0130 22:32:48.336179  686214 main.go:141] libmachine: (newest-cni-507807)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 22:32:48.336197  686214 main.go:141] libmachine: (newest-cni-507807)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/newest-cni-507807.rawdisk'/>
	I0130 22:32:48.336210  686214 main.go:141] libmachine: (newest-cni-507807)       <target dev='hda' bus='virtio'/>
	I0130 22:32:48.336222  686214 main.go:141] libmachine: (newest-cni-507807)     </disk>
	I0130 22:32:48.336234  686214 main.go:141] libmachine: (newest-cni-507807)     <interface type='network'>
	I0130 22:32:48.336254  686214 main.go:141] libmachine: (newest-cni-507807)       <source network='mk-newest-cni-507807'/>
	I0130 22:32:48.336266  686214 main.go:141] libmachine: (newest-cni-507807)       <model type='virtio'/>
	I0130 22:32:48.336277  686214 main.go:141] libmachine: (newest-cni-507807)     </interface>
	I0130 22:32:48.336289  686214 main.go:141] libmachine: (newest-cni-507807)     <interface type='network'>
	I0130 22:32:48.336303  686214 main.go:141] libmachine: (newest-cni-507807)       <source network='default'/>
	I0130 22:32:48.336315  686214 main.go:141] libmachine: (newest-cni-507807)       <model type='virtio'/>
	I0130 22:32:48.336325  686214 main.go:141] libmachine: (newest-cni-507807)     </interface>
	I0130 22:32:48.336342  686214 main.go:141] libmachine: (newest-cni-507807)     <serial type='pty'>
	I0130 22:32:48.336354  686214 main.go:141] libmachine: (newest-cni-507807)       <target port='0'/>
	I0130 22:32:48.336367  686214 main.go:141] libmachine: (newest-cni-507807)     </serial>
	I0130 22:32:48.336378  686214 main.go:141] libmachine: (newest-cni-507807)     <console type='pty'>
	I0130 22:32:48.336387  686214 main.go:141] libmachine: (newest-cni-507807)       <target type='serial' port='0'/>
	I0130 22:32:48.336397  686214 main.go:141] libmachine: (newest-cni-507807)     </console>
	I0130 22:32:48.336410  686214 main.go:141] libmachine: (newest-cni-507807)     <rng model='virtio'>
	I0130 22:32:48.336425  686214 main.go:141] libmachine: (newest-cni-507807)       <backend model='random'>/dev/random</backend>
	I0130 22:32:48.336436  686214 main.go:141] libmachine: (newest-cni-507807)     </rng>
	I0130 22:32:48.336449  686214 main.go:141] libmachine: (newest-cni-507807)     
	I0130 22:32:48.336460  686214 main.go:141] libmachine: (newest-cni-507807)     
	I0130 22:32:48.336473  686214 main.go:141] libmachine: (newest-cni-507807)   </devices>
	I0130 22:32:48.336485  686214 main.go:141] libmachine: (newest-cni-507807) </domain>
	I0130 22:32:48.336497  686214 main.go:141] libmachine: (newest-cni-507807) 
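
	The lines above spell out the libvirt domain XML that the kvm2 driver defines for the VM. To confirm the definition from a running libvirt, a sketch along these lines would work, assuming the libvirt.org/go/libvirt Go bindings and local access to qemu:///system (neither is part of the test harness itself).

	// inspect_domain.go: hypothetical helper, not part of the test harness.
	// Assumes the libvirt.org/go/libvirt bindings and a local qemu:///system socket.
	package main

	import (
		"fmt"
		"log"

		"libvirt.org/go/libvirt"
	)

	func main() {
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()

		// Look up the domain defined in the log above and dump its live XML,
		// which should match the <domain type='kvm'> definition logged there.
		dom, err := conn.LookupDomainByName("newest-cni-507807")
		if err != nil {
			log.Fatalf("lookup: %v", err)
		}
		defer dom.Free()

		xml, err := dom.GetXMLDesc(0)
		if err != nil {
			log.Fatalf("dumpxml: %v", err)
		}
		fmt.Println(xml)
	}
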
	I0130 22:32:48.341375  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:82:2f:f1 in network default
	I0130 22:32:48.342016  686214 main.go:141] libmachine: (newest-cni-507807) Ensuring networks are active...
	I0130 22:32:48.342046  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:48.342816  686214 main.go:141] libmachine: (newest-cni-507807) Ensuring network default is active
	I0130 22:32:48.343202  686214 main.go:141] libmachine: (newest-cni-507807) Ensuring network mk-newest-cni-507807 is active
	I0130 22:32:48.343874  686214 main.go:141] libmachine: (newest-cni-507807) Getting domain xml...
	I0130 22:32:48.344671  686214 main.go:141] libmachine: (newest-cni-507807) Creating domain...
	I0130 22:32:49.590569  686214 main.go:141] libmachine: (newest-cni-507807) Waiting to get IP...
	I0130 22:32:49.591323  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:49.591918  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:49.591994  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:49.591892  686237 retry.go:31] will retry after 229.507483ms: waiting for machine to come up
	I0130 22:32:49.823510  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:49.824065  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:49.824098  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:49.824001  686237 retry.go:31] will retry after 334.851564ms: waiting for machine to come up
	I0130 22:32:50.160597  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:50.161061  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:50.161098  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:50.161009  686237 retry.go:31] will retry after 436.519923ms: waiting for machine to come up
	I0130 22:32:50.599599  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:50.600200  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:50.600239  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:50.600111  686237 retry.go:31] will retry after 381.704989ms: waiting for machine to come up
	I0130 22:32:50.983895  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:50.984572  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:50.984608  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:50.984495  686237 retry.go:31] will retry after 501.7142ms: waiting for machine to come up
	I0130 22:32:51.488171  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:51.488619  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:51.488646  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:51.488586  686237 retry.go:31] will retry after 703.569138ms: waiting for machine to come up
	I0130 22:32:52.193577  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:52.194510  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:52.194534  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:52.194453  686237 retry.go:31] will retry after 885.583889ms: waiting for machine to come up
	I0130 22:32:53.082178  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:53.082636  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:53.082668  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:53.082582  686237 retry.go:31] will retry after 1.389780595s: waiting for machine to come up
	I0130 22:32:54.474383  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:54.474903  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:54.474939  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:54.474839  686237 retry.go:31] will retry after 1.584665962s: waiting for machine to come up
	I0130 22:32:56.061266  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:56.061758  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:56.061783  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:56.061710  686237 retry.go:31] will retry after 2.068215782s: waiting for machine to come up
	I0130 22:32:58.132113  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:58.132611  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:58.132636  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:58.132548  686237 retry.go:31] will retry after 2.48238431s: waiting for machine to come up
	I0130 22:33:00.618332  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:00.618753  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:33:00.618782  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:33:00.618701  686237 retry.go:31] will retry after 2.512763919s: waiting for machine to come up
	I0130 22:33:03.133026  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:03.133425  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:33:03.133454  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:33:03.133357  686237 retry.go:31] will retry after 4.117036665s: waiting for machine to come up
	I0130 22:33:07.254595  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:07.255049  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:33:07.255077  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:33:07.254995  686237 retry.go:31] will retry after 3.671927151s: waiting for machine to come up
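
	The tail of this start log is the driver polling for the new VM's IP address, retrying with steadily longer delays ("will retry after ...: waiting for machine to come up"). Below is a minimal sketch of that wait-with-backoff pattern; the lookupIP stub, the jitter, and the timeout are illustrative assumptions, not minikube's actual retry helper.

	// wait_for_ip.go: minimal sketch of the "will retry after X: waiting for
	// machine to come up" pattern seen above. The lookupIP stub and the
	// backoff constants are assumptions, not minikube's retry.go.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying libvirt DHCP leases for the domain's MAC.
	func lookupIP(domain string) (string, error) {
		return "", errors.New("unable to find current IP address")
	}

	func main() {
		delay := 200 * time.Millisecond
		deadline := time.Now().Add(2 * time.Minute)

		for time.Now().Before(deadline) {
			if ip, err := lookupIP("newest-cni-507807"); err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the delay and add jitter, roughly matching the widening
			// intervals (229ms, 334ms, ..., 4.1s) in the log above.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		fmt.Println("timed out waiting for machine to come up")
	}
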
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:14:00 UTC, ends at Tue 2024-01-30 22:33:12 UTC. --
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.846528470Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653991846513078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9b5e0168-91a0-48e2-aa9b-0116da03fcf0 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.847091796Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f9aea8c-71be-4b00-afff-3b46da49f041 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.847163381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f9aea8c-71be-4b00-afff-3b46da49f041 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.847329510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f9aea8c-71be-4b00-afff-3b46da49f041 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.889668135Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=1f6a1443-fc86-4a72-85e9-4f28f0da2057 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.889755756Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=1f6a1443-fc86-4a72-85e9-4f28f0da2057 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.890702442Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=a9f70acf-356b-449b-939f-13f44ebc5e46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.891201062Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653991891187816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a9f70acf-356b-449b-939f-13f44ebc5e46 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.891959146Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=102bf85e-9b97-4ad2-a922-a8616b64bb24 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.892030324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=102bf85e-9b97-4ad2-a922-a8616b64bb24 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.892192223Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=102bf85e-9b97-4ad2-a922-a8616b64bb24 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.936501578Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=9947e923-e7a2-43e1-94c7-23b572830e9c name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.936610218Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=9947e923-e7a2-43e1-94c7-23b572830e9c name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.938757661Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=6cc9fc02-eb2d-4983-a0aa-59cbe5503acd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.939418849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653991939397768,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=6cc9fc02-eb2d-4983-a0aa-59cbe5503acd name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.940098665Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3a5eba9b-d57e-495e-8beb-3a14d6697692 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.940253004Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3a5eba9b-d57e-495e-8beb-3a14d6697692 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.940465444Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3a5eba9b-d57e-495e-8beb-3a14d6697692 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.980564166Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0fd19154-505d-42ca-aed8-bbc257e207c0 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.980670906Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0fd19154-505d-42ca-aed8-bbc257e207c0 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.982172276Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=c29a1749-cb3f-4940-b7c6-faf414c406f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.982622449Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653991982604131,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=c29a1749-cb3f-4940-b7c6-faf414c406f1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.983395971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8bbcca3d-52ec-4122-a6ab-be8424a8dfdf name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.983468186Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8bbcca3d-52ec-4122-a6ab-be8424a8dfdf name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:11 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:33:11.983664314Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8bbcca3d-52ec-4122-a6ab-be8424a8dfdf name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43da5b55fb482       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   13 minutes ago      Running             storage-provisioner       0                   932762720b948       storage-provisioner
	39c79e5bf1f78       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   13 minutes ago      Running             kube-proxy                0                   04bd582806752       kube-proxy-9b97q
	226d3c6d1fe8c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   13 minutes ago      Running             coredns                   0                   94b66e79a14ff       coredns-5dd5756b68-z27l8
	c65c8f7f27cef       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   14 minutes ago      Running             kube-scheduler            2                   7a83056fd8c58       kube-scheduler-default-k8s-diff-port-850803
	1ae8e1a1886b9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   14 minutes ago      Running             etcd                      2                   8af4e703fb33d       etcd-default-k8s-diff-port-850803
	a6dda49131d42       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   14 minutes ago      Running             kube-apiserver            2                   c73314d7536a8       kube-apiserver-default-k8s-diff-port-850803
	bdf2eff0e83f3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   14 minutes ago      Running             kube-controller-manager   2                   e6255630721a2       kube-controller-manager-default-k8s-diff-port-850803
	
	
	==> coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-850803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-850803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=default-k8s-diff-port-850803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_19_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850803
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 22:33:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.254
	  Hostname:    default-k8s-diff-port-850803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5243e2b5c284dc1ad35b1a6be575851
	  System UUID:                c5243e2b-5c28-4dc1-ad35-b1a6be575851
	  Boot ID:                    ceabb56f-f95f-4d19-af00-af634aeedb28
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-z27l8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-default-k8s-diff-port-850803                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-default-k8s-diff-port-850803             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850803    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-9b97q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-default-k8s-diff-port-850803             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-nkcv4                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         13m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeReady
	  Normal  RegisteredNode           13m   node-controller  Node default-k8s-diff-port-850803 event: Registered Node default-k8s-diff-port-850803 in Controller
	
	
	==> dmesg <==
	[Jan30 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081397] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.546455] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.305559] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[Jan30 22:14] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.506124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.865067] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.100132] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.145136] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.128300] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.259499] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +17.877134] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[ +21.308001] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 22:19] systemd-fstab-generator[3503]: Ignoring "noauto" for root device
	[  +9.278029] systemd-fstab-generator[3832]: Ignoring "noauto" for root device
	[ +15.014166] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] <==
	{"level":"info","ts":"2024-01-30T22:19:12.690942Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.254:2380"}
	{"level":"info","ts":"2024-01-30T22:19:12.691098Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.254:2380"}
	{"level":"info","ts":"2024-01-30T22:19:12.696466Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"c47571729f78ba63","initial-advertise-peer-urls":["https://192.168.50.254:2380"],"listen-peer-urls":["https://192.168.50.254:2380"],"advertise-client-urls":["https://192.168.50.254:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.254:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T22:19:12.696583Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T22:19:12.828983Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c47571729f78ba63 is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-30T22:19:12.829082Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c47571729f78ba63 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-30T22:19:12.829121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c47571729f78ba63 received MsgPreVoteResp from c47571729f78ba63 at term 1"}
	{"level":"info","ts":"2024-01-30T22:19:12.829175Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c47571729f78ba63 became candidate at term 2"}
	{"level":"info","ts":"2024-01-30T22:19:12.829207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c47571729f78ba63 received MsgVoteResp from c47571729f78ba63 at term 2"}
	{"level":"info","ts":"2024-01-30T22:19:12.829242Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c47571729f78ba63 became leader at term 2"}
	{"level":"info","ts":"2024-01-30T22:19:12.829276Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c47571729f78ba63 elected leader c47571729f78ba63 at term 2"}
	{"level":"info","ts":"2024-01-30T22:19:12.834117Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.838254Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"c47571729f78ba63","local-member-attributes":"{Name:default-k8s-diff-port-850803 ClientURLs:[https://192.168.50.254:2379]}","request-path":"/0/members/c47571729f78ba63/attributes","cluster-id":"a0c94ab6025ee16","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T22:19:12.838312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:19:12.839546Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T22:19:12.84118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:19:12.843755Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a0c94ab6025ee16","local-member-id":"c47571729f78ba63","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.844023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.844094Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.853974Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T22:19:12.854103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T22:19:12.874481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.254:2379"}
	{"level":"info","ts":"2024-01-30T22:29:13.279461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-01-30T22:29:13.282984Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.660988ms","hash":529779308}
	{"level":"info","ts":"2024-01-30T22:29:13.283075Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":529779308,"revision":677,"compact-revision":-1}
	
	
	==> kernel <==
	 22:33:12 up 19 min,  0 users,  load average: 0.05, 0.12, 0.16
	Linux default-k8s-diff-port-850803 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] <==
	I0130 22:29:14.735499       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:29:15.735629       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:29:15.735726       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:29:15.735752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:29:15.735662       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:29:15.735843       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:29:15.736842       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:30:14.599732       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:30:15.736861       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:30:15.737041       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:30:15.737098       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:30:15.737057       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:30:15.737250       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:30:15.738525       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:31:14.600133       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 22:32:14.599990       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:32:15.737818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:32:15.737947       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:32:15.737958       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:32:15.739172       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:32:15.739303       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:32:15.739348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] <==
	I0130 22:27:31.584323       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:01.137263       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:01.603527       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:31.145200       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:31.613387       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:01.150758       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:01.623566       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:31.157297       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:31.632760       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:01.162768       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:01.648277       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:31.170321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:31.661115       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:30:34.406961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="260.77µs"
	I0130 22:30:45.403739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.783µs"
	E0130 22:31:01.176151       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:01.670308       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:31.182265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:31.680717       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:01.187313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:01.688528       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:31.195547       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:31.700254       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:33:01.202117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:33:01.710831       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] <==
	I0130 22:19:35.248848       1 server_others.go:69] "Using iptables proxy"
	I0130 22:19:35.265657       1 node.go:141] Successfully retrieved node IP: 192.168.50.254
	I0130 22:19:35.308666       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 22:19:35.308728       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 22:19:35.312329       1 server_others.go:152] "Using iptables Proxier"
	I0130 22:19:35.313106       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 22:19:35.314252       1 server.go:846] "Version info" version="v1.28.4"
	I0130 22:19:35.314356       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 22:19:35.316111       1 config.go:188] "Starting service config controller"
	I0130 22:19:35.316761       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 22:19:35.316980       1 config.go:97] "Starting endpoint slice config controller"
	I0130 22:19:35.317145       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 22:19:35.319548       1 config.go:315] "Starting node config controller"
	I0130 22:19:35.319686       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 22:19:35.417519       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 22:19:35.417566       1 shared_informer.go:318] Caches are synced for service config
	I0130 22:19:35.419874       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] <==
	W0130 22:19:14.760610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:19:14.760618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 22:19:14.762296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:19:14.762343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 22:19:15.600160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 22:19:15.600263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 22:19:15.621628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:19:15.621700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 22:19:15.650773       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:19:15.650826       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:19:15.699379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:19:15.699451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 22:19:15.772354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 22:19:15.772406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 22:19:15.791274       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:19:15.791325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 22:19:15.802102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 22:19:15.802424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 22:19:15.840266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:19:15.840538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 22:19:16.018790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0130 22:19:16.018843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0130 22:19:16.034226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:19:16.034275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0130 22:19:18.351464       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:14:00 UTC, ends at Tue 2024-01-30 22:33:12 UTC. --
	Jan 30 22:30:20 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:30:20.437108    3839 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 22:30:20 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:30:20.437184    3839 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Jan 30 22:30:20 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:30:20.437458    3839 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2g9k8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe
:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessa
gePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-nkcv4_kube-system(8ff91827-4613-4a66-963b-9bec1c1493bc): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:30:20 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:30:20.437501    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:30:34 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:30:34.380687    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:30:45 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:30:45.378245    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:31:00 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:00.378469    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:31:11 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:11.377999    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:31:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:18.495973    3839 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:31:18 default-k8s-diff-port-850803 kubelet[3839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:31:18 default-k8s-diff-port-850803 kubelet[3839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:31:18 default-k8s-diff-port-850803 kubelet[3839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:31:26 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:26.378157    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:31:41 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:41.377767    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:31:56 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:56.378105    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:09 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:09.377697    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:18.496445    3839 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:32:21 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:21.378205    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:32 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:32.381282    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:47 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:47.377672    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:59 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:59.378545    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:33:12 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:33:12.378590    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	
	
	==> storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] <==
	I0130 22:19:35.126322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:19:35.139705       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:19:35.140049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:19:35.152767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:19:35.153185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850803_bfa7c95b-822b-4c7d-bda7-74e8bf4d2e70!
	I0130 22:19:35.156644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6af21fb0-2e65-4b5d-80c3-01a42f661b1d", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850803_bfa7c95b-822b-4c7d-bda7-74e8bf4d2e70 became leader
	I0130 22:19:35.253993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850803_bfa7c95b-822b-4c7d-bda7-74e8bf4d2e70!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nkcv4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 describe pod metrics-server-57f55c9bc5-nkcv4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850803 describe pod metrics-server-57f55c9bc5-nkcv4: exit status 1 (71.598934ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nkcv4" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-850803 describe pod metrics-server-57f55c9bc5-nkcv4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.33s)

x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (513.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 22:24:25.157267  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:24:32.717214  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 22:25:55.766417  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 22:26:52.587859  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-912992 -n old-k8s-version-912992
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:32:43.894023766 +0000 UTC m=+5535.294965883
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-912992 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-912992 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.484µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-912992 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-912992 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-912992 logs -n 25: (1.680698682s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:04 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|         | --alsologtostderr                                      |                              |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                     |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p stopped-upgrade-742001                              | stopped-upgrade-742001       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-822826                              | cert-expiration-822826       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:09:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:09:08.900187  681007 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:09:08.900447  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900456  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:09:08.900460  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:09:08.900635  681007 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:09:08.901158  681007 out.go:303] Setting JSON to false
	I0130 22:09:08.902121  681007 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":10301,"bootTime":1706642248,"procs":210,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:09:08.902185  681007 start.go:138] virtualization: kvm guest
	I0130 22:09:08.904443  681007 out.go:177] * [default-k8s-diff-port-850803] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:09:08.905904  681007 notify.go:220] Checking for updates...
	I0130 22:09:08.905916  681007 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:09:08.907548  681007 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:09:08.908959  681007 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:09:08.910401  681007 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:09:08.911766  681007 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:09:08.913044  681007 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:09:08.914682  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:09:08.915157  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.915201  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.929650  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0130 22:09:08.930098  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.930701  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.930721  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.931048  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.931239  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.931458  681007 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:09:08.931745  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:09:08.931778  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:09:08.946395  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40441
	I0130 22:09:08.946754  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:09:08.947305  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:09:08.947328  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:09:08.947686  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:09:08.947865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:09:08.982088  681007 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 22:09:08.983300  681007 start.go:298] selected driver: kvm2
	I0130 22:09:08.983312  681007 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.983408  681007 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:09:08.984088  681007 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:08.984161  681007 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:09:08.997808  681007 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:09:08.998205  681007 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 22:09:08.998285  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:09:08.998305  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:09:08.998323  681007 start_flags.go:321] config:
	{Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-85080
3 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:09:08.998554  681007 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:09:09.000506  681007 out.go:177] * Starting control plane node default-k8s-diff-port-850803 in cluster default-k8s-diff-port-850803
	I0130 22:09:09.417791  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:09.001801  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:09:09.001832  681007 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 22:09:09.001844  681007 cache.go:56] Caching tarball of preloaded images
	I0130 22:09:09.001930  681007 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:09:09.001942  681007 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 22:09:09.002074  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:09:09.002279  681007 start.go:365] acquiring machines lock for default-k8s-diff-port-850803: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:09:15.497723  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:18.569709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:24.649709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:27.721682  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:33.801746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:36.873758  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:42.953715  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:46.025774  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:52.105752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:09:55.177803  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:01.257740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:04.329775  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:10.409748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:13.481709  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:19.561742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:22.634236  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:28.713807  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:31.785746  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:37.865734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:40.937754  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:47.017740  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:50.089744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:56.169767  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:10:59.241735  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:05.321760  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:08.393763  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:14.473745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:17.545673  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:23.625780  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:26.697711  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:32.777688  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:35.849700  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:41.929752  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:45.001744  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:51.081733  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:11:54.153686  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:00.233749  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:03.305724  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:09.385748  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:12.457710  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:18.537805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:21.609734  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:27.689765  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:30.761718  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:36.841762  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:39.913805  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:45.993742  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:49.065753  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:55.145745  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:12:58.217703  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.302231  680786 start.go:369] acquired machines lock for "no-preload-023824" in 4m22.656152529s
	I0130 22:13:07.302304  680786 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:07.302314  680786 fix.go:54] fixHost starting: 
	I0130 22:13:07.302790  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:07.302835  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:07.317987  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34743
	I0130 22:13:07.318451  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:07.318943  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:13:07.318965  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:07.319340  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:07.319538  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:07.319679  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:13:07.321151  680786 fix.go:102] recreateIfNeeded on no-preload-023824: state=Stopped err=<nil>
	I0130 22:13:07.321173  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	W0130 22:13:07.321343  680786 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:07.322929  680786 out.go:177] * Restarting existing kvm2 VM for "no-preload-023824" ...
	I0130 22:13:04.297739  680506 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.84:22: connect: no route to host
	I0130 22:13:07.299984  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:07.300024  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:13:07.302029  680506 machine.go:91] provisioned docker machine in 4m44.646018806s
	I0130 22:13:07.302108  680506 fix.go:56] fixHost completed within 4m44.666279152s
	I0130 22:13:07.302116  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 4m44.666320503s
	W0130 22:13:07.302153  680506 start.go:694] error starting host: provision: host is not running
	W0130 22:13:07.302282  680506 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0130 22:13:07.302293  680506 start.go:709] Will try again in 5 seconds ...
	I0130 22:13:07.324101  680786 main.go:141] libmachine: (no-preload-023824) Calling .Start
	I0130 22:13:07.324252  680786 main.go:141] libmachine: (no-preload-023824) Ensuring networks are active...
	I0130 22:13:07.325034  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network default is active
	I0130 22:13:07.325415  680786 main.go:141] libmachine: (no-preload-023824) Ensuring network mk-no-preload-023824 is active
	I0130 22:13:07.325804  680786 main.go:141] libmachine: (no-preload-023824) Getting domain xml...
	I0130 22:13:07.326696  680786 main.go:141] libmachine: (no-preload-023824) Creating domain...
	I0130 22:13:08.499216  680786 main.go:141] libmachine: (no-preload-023824) Waiting to get IP...
	I0130 22:13:08.500483  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.500933  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.501067  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.500931  681630 retry.go:31] will retry after 268.447444ms: waiting for machine to come up
	I0130 22:13:08.771705  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:08.772073  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:08.772101  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:08.772010  681630 retry.go:31] will retry after 235.233391ms: waiting for machine to come up
	I0130 22:13:09.008402  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.008795  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.008826  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.008757  681630 retry.go:31] will retry after 433.981592ms: waiting for machine to come up
	I0130 22:13:09.444576  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.444963  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.445001  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.444900  681630 retry.go:31] will retry after 518.108537ms: waiting for machine to come up
	I0130 22:13:12.306584  680506 start.go:365] acquiring machines lock for old-k8s-version-912992: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:13:09.964605  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:09.964956  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:09.964985  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:09.964919  681630 retry.go:31] will retry after 497.667085ms: waiting for machine to come up
	I0130 22:13:10.464522  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:10.464897  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:10.464930  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:10.464853  681630 retry.go:31] will retry after 918.136538ms: waiting for machine to come up
	I0130 22:13:11.384191  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:11.384665  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:11.384719  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:11.384630  681630 retry.go:31] will retry after 942.595537ms: waiting for machine to come up
	I0130 22:13:12.328976  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:12.329412  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:12.329438  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:12.329365  681630 retry.go:31] will retry after 1.080632129s: waiting for machine to come up
	I0130 22:13:13.411494  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:13.411880  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:13.411905  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:13.411830  681630 retry.go:31] will retry after 1.70851135s: waiting for machine to come up
	I0130 22:13:15.122731  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:15.123212  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:15.123244  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:15.123164  681630 retry.go:31] will retry after 1.890143577s: waiting for machine to come up
	I0130 22:13:17.016347  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:17.016789  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:17.016812  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:17.016745  681630 retry.go:31] will retry after 2.710901352s: waiting for machine to come up
	I0130 22:13:19.731235  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:19.731687  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:19.731717  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:19.731628  681630 retry.go:31] will retry after 3.494667363s: waiting for machine to come up
	I0130 22:13:23.227477  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:23.227894  680786 main.go:141] libmachine: (no-preload-023824) DBG | unable to find current IP address of domain no-preload-023824 in network mk-no-preload-023824
	I0130 22:13:23.227927  680786 main.go:141] libmachine: (no-preload-023824) DBG | I0130 22:13:23.227844  681630 retry.go:31] will retry after 4.45900259s: waiting for machine to come up
	I0130 22:13:28.902379  680821 start.go:369] acquired machines lock for "embed-certs-713938" in 4m43.197815022s
	I0130 22:13:28.902454  680821 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:28.902466  680821 fix.go:54] fixHost starting: 
	I0130 22:13:28.902824  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:28.902863  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:28.922121  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35575
	I0130 22:13:28.922554  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:28.923019  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:13:28.923040  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:28.923378  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:28.923587  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:28.923730  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:13:28.925000  680821 fix.go:102] recreateIfNeeded on embed-certs-713938: state=Stopped err=<nil>
	I0130 22:13:28.925042  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	W0130 22:13:28.925225  680821 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:28.927620  680821 out.go:177] * Restarting existing kvm2 VM for "embed-certs-713938" ...
	I0130 22:13:27.688611  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689047  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has current primary IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.689078  680786 main.go:141] libmachine: (no-preload-023824) Found IP for machine: 192.168.61.232
	I0130 22:13:27.689095  680786 main.go:141] libmachine: (no-preload-023824) Reserving static IP address...
	I0130 22:13:27.689540  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.689585  680786 main.go:141] libmachine: (no-preload-023824) DBG | skip adding static IP to network mk-no-preload-023824 - found existing host DHCP lease matching {name: "no-preload-023824", mac: "52:54:00:d1:23:54", ip: "192.168.61.232"}
	I0130 22:13:27.689610  680786 main.go:141] libmachine: (no-preload-023824) Reserved static IP address: 192.168.61.232
	I0130 22:13:27.689630  680786 main.go:141] libmachine: (no-preload-023824) Waiting for SSH to be available...
	I0130 22:13:27.689645  680786 main.go:141] libmachine: (no-preload-023824) DBG | Getting to WaitForSSH function...
	I0130 22:13:27.691725  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692037  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.692060  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.692196  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH client type: external
	I0130 22:13:27.692236  680786 main.go:141] libmachine: (no-preload-023824) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa (-rw-------)
	I0130 22:13:27.692288  680786 main.go:141] libmachine: (no-preload-023824) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.232 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:27.692305  680786 main.go:141] libmachine: (no-preload-023824) DBG | About to run SSH command:
	I0130 22:13:27.692318  680786 main.go:141] libmachine: (no-preload-023824) DBG | exit 0
	I0130 22:13:27.784900  680786 main.go:141] libmachine: (no-preload-023824) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:27.785232  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetConfigRaw
	I0130 22:13:27.786142  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:27.788581  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.788961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.788997  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.789280  680786 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/config.json ...
	I0130 22:13:27.789457  680786 machine.go:88] provisioning docker machine ...
	I0130 22:13:27.789489  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:27.789691  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.789857  680786 buildroot.go:166] provisioning hostname "no-preload-023824"
	I0130 22:13:27.789879  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:27.790013  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.792055  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792370  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.792405  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.792478  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.792643  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.792790  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.793010  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.793205  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.793814  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.793842  680786 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-023824 && echo "no-preload-023824" | sudo tee /etc/hostname
	I0130 22:13:27.931141  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-023824
	
	I0130 22:13:27.931176  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:27.933882  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934242  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:27.934277  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:27.934403  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:27.934588  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934748  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:27.934917  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:27.935106  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:27.935413  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:27.935438  680786 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-023824' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-023824/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-023824' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:28.067312  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:28.067345  680786 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:28.067368  680786 buildroot.go:174] setting up certificates
	I0130 22:13:28.067380  680786 provision.go:83] configureAuth start
	I0130 22:13:28.067389  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetMachineName
	I0130 22:13:28.067687  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.070381  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070751  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.070787  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.070891  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.073317  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073672  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.073704  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.073925  680786 provision.go:138] copyHostCerts
	I0130 22:13:28.074050  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:28.074092  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:28.074186  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:28.074311  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:28.074330  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:28.074381  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:28.074474  680786 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:28.074485  680786 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:28.074527  680786 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:28.074604  680786 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.no-preload-023824 san=[192.168.61.232 192.168.61.232 localhost 127.0.0.1 minikube no-preload-023824]
	I0130 22:13:28.175428  680786 provision.go:172] copyRemoteCerts
	I0130 22:13:28.175531  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:28.175566  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.178015  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178376  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.178416  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.178540  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.178705  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.178860  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.179029  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.265687  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:28.287768  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:28.309363  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:28.331204  680786 provision.go:86] duration metric: configureAuth took 263.811459ms
	I0130 22:13:28.331232  680786 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:28.331476  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:13:28.331568  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.333837  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334205  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.334243  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.334421  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.334626  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334804  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.334978  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.335183  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.335552  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.335569  680786 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:28.648182  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:28.648214  680786 machine.go:91] provisioned docker machine in 858.733436ms
	I0130 22:13:28.648228  680786 start.go:300] post-start starting for "no-preload-023824" (driver="kvm2")
	I0130 22:13:28.648254  680786 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:28.648272  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.648633  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:28.648669  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.651616  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.651990  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.652019  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.652200  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.652427  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.652589  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.652737  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.742644  680786 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:28.746791  680786 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:28.746818  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:28.746949  680786 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:28.747065  680786 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:28.747165  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:28.755371  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:28.776917  680786 start.go:303] post-start completed in 128.667778ms
	I0130 22:13:28.776944  680786 fix.go:56] fixHost completed within 21.474623735s
	I0130 22:13:28.776969  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.779261  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779562  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.779591  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.779715  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.779938  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780109  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.780291  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.780465  680786 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:28.780778  680786 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.232 22 <nil> <nil>}
	I0130 22:13:28.780790  680786 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:28.902234  680786 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652808.852489807
	
	I0130 22:13:28.902258  680786 fix.go:206] guest clock: 1706652808.852489807
	I0130 22:13:28.902265  680786 fix.go:219] Guest: 2024-01-30 22:13:28.852489807 +0000 UTC Remote: 2024-01-30 22:13:28.776948754 +0000 UTC m=+284.278530089 (delta=75.541053ms)
	I0130 22:13:28.902285  680786 fix.go:190] guest clock delta is within tolerance: 75.541053ms
	I0130 22:13:28.902291  680786 start.go:83] releasing machines lock for "no-preload-023824", held for 21.600013123s
	I0130 22:13:28.902314  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.902603  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:28.905058  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905455  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.905516  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.905584  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906376  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906578  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:13:28.906653  680786 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:28.906711  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.906863  680786 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:28.906902  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:13:28.909484  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909525  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909824  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909856  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909886  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:28.909902  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:28.909952  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910141  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:13:28.910150  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910347  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:13:28.910350  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:13:28.910512  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:28.910620  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:13:29.028948  680786 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:29.034774  680786 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:29.182970  680786 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:29.190306  680786 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:29.190375  680786 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:29.205114  680786 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:29.205135  680786 start.go:475] detecting cgroup driver to use...
	I0130 22:13:29.205195  680786 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:29.220998  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:29.234283  680786 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:29.234332  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:29.246205  680786 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:29.258169  680786 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:29.366756  680786 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:29.499821  680786 docker.go:233] disabling docker service ...
	I0130 22:13:29.499908  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:29.513281  680786 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:29.526823  680786 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:29.644395  680786 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:29.756912  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:29.768811  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:29.785830  680786 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:29.785897  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.794702  680786 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:29.794755  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.803342  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.812148  680786 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:29.820802  680786 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:29.830052  680786 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:29.838334  680786 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:29.838402  680786 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:29.849789  680786 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:29.858298  680786 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:29.968180  680786 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:30.134232  680786 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:30.134309  680786 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:30.139054  680786 start.go:543] Will wait 60s for crictl version
	I0130 22:13:30.139130  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.142760  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:30.183071  680786 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:30.183175  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.225981  680786 ssh_runner.go:195] Run: crio --version
	I0130 22:13:30.276982  680786 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 22:13:28.928924  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Start
	I0130 22:13:28.929139  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring networks are active...
	I0130 22:13:28.929766  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network default is active
	I0130 22:13:28.930145  680821 main.go:141] libmachine: (embed-certs-713938) Ensuring network mk-embed-certs-713938 is active
	I0130 22:13:28.930485  680821 main.go:141] libmachine: (embed-certs-713938) Getting domain xml...
	I0130 22:13:28.931095  680821 main.go:141] libmachine: (embed-certs-713938) Creating domain...
	I0130 22:13:30.162733  680821 main.go:141] libmachine: (embed-certs-713938) Waiting to get IP...
	I0130 22:13:30.163807  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.164261  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.164352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.164238  681759 retry.go:31] will retry after 217.071442ms: waiting for machine to come up
	I0130 22:13:30.382542  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.382918  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.382952  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.382899  681759 retry.go:31] will retry after 372.773352ms: waiting for machine to come up
	I0130 22:13:30.278407  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetIP
	I0130 22:13:30.281307  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281730  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:13:30.281762  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:13:30.281947  680786 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:30.285873  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:30.299947  680786 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:13:30.300015  680786 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:30.342071  680786 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 22:13:30.342094  680786 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:13:30.342198  680786 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.342218  680786 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.342257  680786 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.342278  680786 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.342288  680786 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.342205  680786 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.342265  680786 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0130 22:13:30.342563  680786 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343800  680786 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0130 22:13:30.343838  680786 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.343804  680786 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.343805  680786 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.343809  680786 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.343803  680786 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.343801  680786 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.514364  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0130 22:13:30.529476  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.537822  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.540358  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.546677  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.559021  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.559189  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.579664  680786 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.721137  680786 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0130 22:13:30.721228  680786 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.721280  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.745682  680786 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0130 22:13:30.745742  680786 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.745796  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750720  680786 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0130 22:13:30.750770  680786 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.750821  680786 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0130 22:13:30.750841  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.750854  680786 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.750897  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768135  680786 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0130 22:13:30.768182  680786 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.768199  680786 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0130 22:13:30.768243  680786 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.768289  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768303  680786 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0130 22:13:30.768246  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768384  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0130 22:13:30.768329  680786 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: which crictl
	I0130 22:13:30.768434  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0130 22:13:30.768499  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0130 22:13:30.768527  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0130 22:13:30.785074  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0130 22:13:30.785548  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:13:30.895706  680786 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0130 22:13:30.895775  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.895925  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.910469  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910496  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910549  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0130 22:13:30.910578  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:30.910584  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0130 22:13:30.910580  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:30.910664  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.910628  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:30.928331  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0130 22:13:30.928431  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:30.958095  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958123  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958140  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0130 22:13:30.958176  680786 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958205  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958178  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0130 22:13:30.958249  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0130 22:13:30.958182  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0130 22:13:30.958271  680786 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:30.958290  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0130 22:13:33.833277  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.87499883s)
	I0130 22:13:33.833318  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0130 22:13:33.833336  680786 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (2.875036585s)
	I0130 22:13:33.833372  680786 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0130 22:13:33.833366  680786 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:33.833461  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0130 22:13:30.757262  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:30.757819  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:30.757870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:30.757738  681759 retry.go:31] will retry after 414.437055ms: waiting for machine to come up
	I0130 22:13:31.174434  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.174883  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.174936  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.174831  681759 retry.go:31] will retry after 555.308421ms: waiting for machine to come up
	I0130 22:13:31.731536  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:31.732150  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:31.732188  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:31.732111  681759 retry.go:31] will retry after 484.945442ms: waiting for machine to come up
	I0130 22:13:32.218554  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:32.218989  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:32.219024  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:32.218934  681759 retry.go:31] will retry after 802.660361ms: waiting for machine to come up
	I0130 22:13:33.022920  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:33.023362  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:33.023397  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:33.023298  681759 retry.go:31] will retry after 990.694559ms: waiting for machine to come up
	I0130 22:13:34.015896  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:34.016379  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:34.016407  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:34.016345  681759 retry.go:31] will retry after 1.382435075s: waiting for machine to come up
	I0130 22:13:35.400870  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:35.401294  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:35.401327  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:35.401233  681759 retry.go:31] will retry after 1.53975085s: waiting for machine to come up
	I0130 22:13:37.909186  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.075686172s)
	I0130 22:13:37.909214  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0130 22:13:37.909257  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:37.909303  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0130 22:13:39.052225  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.142886078s)
	I0130 22:13:39.052285  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0130 22:13:39.052326  680786 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:39.052412  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0130 22:13:36.942944  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:36.943539  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:36.943580  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:36.943478  681759 retry.go:31] will retry after 1.888978312s: waiting for machine to come up
	I0130 22:13:38.834886  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:38.835467  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:38.835508  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:38.835393  681759 retry.go:31] will retry after 1.774102713s: waiting for machine to come up
	I0130 22:13:41.133330  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.080888409s)
	I0130 22:13:41.133358  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0130 22:13:41.133383  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:41.133432  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0130 22:13:43.814683  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (2.681223745s)
	I0130 22:13:43.814716  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0130 22:13:43.814742  680786 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:43.814779  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0130 22:13:40.611628  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:40.612048  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:40.612083  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:40.611995  681759 retry.go:31] will retry after 2.428322726s: waiting for machine to come up
	I0130 22:13:43.041506  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:43.041916  680821 main.go:141] libmachine: (embed-certs-713938) DBG | unable to find current IP address of domain embed-certs-713938 in network mk-embed-certs-713938
	I0130 22:13:43.041950  680821 main.go:141] libmachine: (embed-certs-713938) DBG | I0130 22:13:43.041859  681759 retry.go:31] will retry after 4.531865882s: waiting for machine to come up
	I0130 22:13:48.690103  681007 start.go:369] acquired machines lock for "default-k8s-diff-port-850803" in 4m39.687788229s
	I0130 22:13:48.690177  681007 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:13:48.690188  681007 fix.go:54] fixHost starting: 
	I0130 22:13:48.690569  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:13:48.690606  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:13:48.709730  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35781
	I0130 22:13:48.710142  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:13:48.710684  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:13:48.710714  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:13:48.711070  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:13:48.711280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:13:48.711446  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:13:48.712865  681007 fix.go:102] recreateIfNeeded on default-k8s-diff-port-850803: state=Stopped err=<nil>
	I0130 22:13:48.712909  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	W0130 22:13:48.713065  681007 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:13:48.716450  681007 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-850803" ...
	I0130 22:13:48.717867  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Start
	I0130 22:13:48.718031  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring networks are active...
	I0130 22:13:48.718700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network default is active
	I0130 22:13:48.719030  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Ensuring network mk-default-k8s-diff-port-850803 is active
	I0130 22:13:48.719391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Getting domain xml...
	I0130 22:13:48.720046  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Creating domain...
	I0130 22:13:44.761511  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0130 22:13:44.761571  680786 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:44.761627  680786 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0130 22:13:46.718526  680786 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.956864919s)
	I0130 22:13:46.718569  680786 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0130 22:13:46.718605  680786 cache_images.go:123] Successfully loaded all cached images
	I0130 22:13:46.718612  680786 cache_images.go:92] LoadImages completed in 16.376507144s
	I0130 22:13:46.718742  680786 ssh_runner.go:195] Run: crio config
	I0130 22:13:46.782286  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:13:46.782311  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:46.782332  680786 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:46.782372  680786 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.232 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-023824 NodeName:no-preload-023824 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.232"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.232 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:46.782544  680786 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.232
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-023824"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.232
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.232"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:46.782617  680786 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-023824 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.232
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:13:46.782674  680786 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 22:13:46.792236  680786 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:46.792309  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:46.800361  680786 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (382 bytes)
	I0130 22:13:46.816070  680786 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 22:13:46.830820  680786 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0130 22:13:46.846493  680786 ssh_runner.go:195] Run: grep 192.168.61.232	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:46.849883  680786 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.232	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:46.861414  680786 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824 for IP: 192.168.61.232
	I0130 22:13:46.861442  680786 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:46.861617  680786 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:46.861664  680786 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:46.861767  680786 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.key
	I0130 22:13:46.861831  680786 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key.e2a9f73e
	I0130 22:13:46.861872  680786 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key
	I0130 22:13:46.862006  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:46.862040  680786 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:46.862051  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:46.862074  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:46.862095  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:46.862118  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:46.862163  680786 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:46.863014  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:46.887626  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:13:46.910152  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:46.931711  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:46.953156  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:46.974390  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:46.996094  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:47.017226  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:47.038317  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:47.059119  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:47.080077  680786 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:47.101123  680786 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:47.116152  680786 ssh_runner.go:195] Run: openssl version
	I0130 22:13:47.121529  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:47.130166  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134329  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.134391  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:47.139537  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:47.148157  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:47.156558  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160623  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.160682  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:47.165652  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:47.174350  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:47.183169  680786 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187220  680786 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.187245  680786 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:47.192369  680786 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:13:47.201432  680786 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:47.205518  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:47.210821  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:47.216074  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:47.221255  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:47.226609  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:47.231891  680786 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0130 22:13:47.237220  680786 kubeadm.go:404] StartCluster: {Name:no-preload-023824 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-023824 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:47.237355  680786 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:47.237395  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:47.277488  680786 cri.go:89] found id: ""
	I0130 22:13:47.277561  680786 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:47.286193  680786 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:47.286220  680786 kubeadm.go:636] restartCluster start
	I0130 22:13:47.286276  680786 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:47.294206  680786 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.295888  680786 kubeconfig.go:92] found "no-preload-023824" server: "https://192.168.61.232:8443"
	I0130 22:13:47.299852  680786 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:47.307350  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.307401  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.317985  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.808078  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:47.808141  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:47.819689  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.308177  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.308241  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.319138  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:48.808388  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:48.808448  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:48.819501  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:49.308165  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.308254  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.319364  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:47.577701  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578126  680821 main.go:141] libmachine: (embed-certs-713938) Found IP for machine: 192.168.72.213
	I0130 22:13:47.578150  680821 main.go:141] libmachine: (embed-certs-713938) Reserving static IP address...
	I0130 22:13:47.578166  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has current primary IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.578564  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.578605  680821 main.go:141] libmachine: (embed-certs-713938) DBG | skip adding static IP to network mk-embed-certs-713938 - found existing host DHCP lease matching {name: "embed-certs-713938", mac: "52:54:00:79:c8:41", ip: "192.168.72.213"}
	I0130 22:13:47.578616  680821 main.go:141] libmachine: (embed-certs-713938) Reserved static IP address: 192.168.72.213
	I0130 22:13:47.578630  680821 main.go:141] libmachine: (embed-certs-713938) Waiting for SSH to be available...
	I0130 22:13:47.578646  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Getting to WaitForSSH function...
	I0130 22:13:47.580757  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581084  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.581120  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.581221  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH client type: external
	I0130 22:13:47.581282  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa (-rw-------)
	I0130 22:13:47.581324  680821 main.go:141] libmachine: (embed-certs-713938) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:13:47.581344  680821 main.go:141] libmachine: (embed-certs-713938) DBG | About to run SSH command:
	I0130 22:13:47.581357  680821 main.go:141] libmachine: (embed-certs-713938) DBG | exit 0
	I0130 22:13:47.669006  680821 main.go:141] libmachine: (embed-certs-713938) DBG | SSH cmd err, output: <nil>: 
	I0130 22:13:47.669397  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetConfigRaw
	I0130 22:13:47.670084  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.672437  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.672782  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.672806  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.673048  680821 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/config.json ...
	I0130 22:13:47.673225  680821 machine.go:88] provisioning docker machine ...
	I0130 22:13:47.673243  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:47.673432  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673608  680821 buildroot.go:166] provisioning hostname "embed-certs-713938"
	I0130 22:13:47.673628  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.673766  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.675747  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676016  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.676043  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.676178  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.676351  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676484  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.676618  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.676743  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.677070  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.677083  680821 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-713938 && echo "embed-certs-713938" | sudo tee /etc/hostname
	I0130 22:13:47.800976  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-713938
	
	I0130 22:13:47.801011  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.803566  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.803876  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.803901  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.804047  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.804235  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804417  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.804537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.804699  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:47.805016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:47.805033  680821 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-713938' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-713938/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-713938' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:13:47.928846  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:13:47.928882  680821 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:13:47.928908  680821 buildroot.go:174] setting up certificates
	I0130 22:13:47.928956  680821 provision.go:83] configureAuth start
	I0130 22:13:47.928976  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetMachineName
	I0130 22:13:47.929283  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:47.931756  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932014  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.932045  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.932206  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.934351  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934647  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.934670  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.934814  680821 provision.go:138] copyHostCerts
	I0130 22:13:47.934875  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:13:47.934889  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:13:47.934963  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:13:47.935072  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:13:47.935087  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:13:47.935120  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:13:47.935196  680821 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:13:47.935206  680821 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:13:47.935234  680821 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:13:47.935349  680821 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.embed-certs-713938 san=[192.168.72.213 192.168.72.213 localhost 127.0.0.1 minikube embed-certs-713938]
	I0130 22:13:47.995543  680821 provision.go:172] copyRemoteCerts
	I0130 22:13:47.995624  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:13:47.995659  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:47.998113  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998409  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:47.998436  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:47.998636  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:47.998822  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:47.999004  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:47.999123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.086454  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:13:48.108713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:13:48.131124  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:13:48.153234  680821 provision.go:86] duration metric: configureAuth took 224.258095ms
	I0130 22:13:48.153269  680821 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:13:48.153447  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:13:48.153554  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.156268  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156673  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.156705  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.156847  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.157070  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157294  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.157481  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.157649  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.158119  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.158143  680821 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:13:48.449095  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:13:48.449131  680821 machine.go:91] provisioned docker machine in 775.890813ms
	I0130 22:13:48.449146  680821 start.go:300] post-start starting for "embed-certs-713938" (driver="kvm2")
	I0130 22:13:48.449161  680821 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:13:48.449185  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.449573  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:13:48.449605  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.452408  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.452831  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.452866  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.453009  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.453240  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.453416  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.453566  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.539764  680821 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:13:48.543876  680821 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:13:48.543905  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:13:48.543969  680821 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:13:48.544045  680821 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:13:48.544163  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:13:48.552947  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:48.573560  680821 start.go:303] post-start completed in 124.400867ms
	I0130 22:13:48.573588  680821 fix.go:56] fixHost completed within 19.671118722s
	I0130 22:13:48.573615  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.576352  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576755  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.576777  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.576965  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.577170  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577337  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.577537  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.577708  680821 main.go:141] libmachine: Using SSH client type: native
	I0130 22:13:48.578016  680821 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.72.213 22 <nil> <nil>}
	I0130 22:13:48.578029  680821 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:13:48.689910  680821 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652828.640343702
	
	I0130 22:13:48.689937  680821 fix.go:206] guest clock: 1706652828.640343702
	I0130 22:13:48.689948  680821 fix.go:219] Guest: 2024-01-30 22:13:48.640343702 +0000 UTC Remote: 2024-01-30 22:13:48.573593176 +0000 UTC m=+303.018932163 (delta=66.750526ms)
	I0130 22:13:48.690012  680821 fix.go:190] guest clock delta is within tolerance: 66.750526ms
	I0130 22:13:48.690023  680821 start.go:83] releasing machines lock for "embed-certs-713938", held for 19.787596053s
	I0130 22:13:48.690062  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.690367  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:48.692836  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693147  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.693180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.693372  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.693895  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694095  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:13:48.694178  680821 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:13:48.694232  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.694331  680821 ssh_runner.go:195] Run: cat /version.json
	I0130 22:13:48.694354  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:13:48.696786  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697137  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697180  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697205  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697357  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697529  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.697648  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:48.697675  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:48.697706  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.697830  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:13:48.697910  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.697985  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:13:48.698143  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:13:48.698307  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:13:48.807627  680821 ssh_runner.go:195] Run: systemctl --version
	I0130 22:13:48.813332  680821 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:13:48.953919  680821 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:13:48.960672  680821 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:13:48.960744  680821 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:13:48.977684  680821 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:13:48.977702  680821 start.go:475] detecting cgroup driver to use...
	I0130 22:13:48.977766  680821 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:13:48.989811  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:13:49.001223  680821 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:13:49.001281  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:13:49.012649  680821 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:13:49.024426  680821 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:13:49.130220  680821 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:13:49.248922  680821 docker.go:233] disabling docker service ...
	I0130 22:13:49.248999  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:13:49.262066  680821 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:13:49.272736  680821 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:13:49.394001  680821 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:13:49.514043  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:13:49.526282  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:13:49.545253  680821 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:13:49.545303  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.554715  680821 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:13:49.554775  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.564248  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.573151  680821 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:13:49.582148  680821 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:13:49.591604  680821 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:13:49.599683  680821 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:13:49.599722  680821 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:13:49.611807  680821 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:13:49.622179  680821 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:13:49.745824  680821 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:13:49.924707  680821 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:13:49.924788  680821 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:13:49.930158  680821 start.go:543] Will wait 60s for crictl version
	I0130 22:13:49.930234  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:13:49.933971  680821 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:13:49.973662  680821 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:13:49.973736  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.018705  680821 ssh_runner.go:195] Run: crio --version
	I0130 22:13:50.070907  680821 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:13:50.072352  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetIP
	I0130 22:13:50.075100  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075487  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:13:50.075519  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:13:50.075750  680821 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0130 22:13:50.079538  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:50.093965  680821 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:13:50.094028  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:50.133425  680821 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:13:50.133506  680821 ssh_runner.go:195] Run: which lz4
	I0130 22:13:50.137267  680821 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:13:50.141273  680821 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:13:50.141299  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:13:49.938197  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting to get IP...
	I0130 22:13:49.939301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939717  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:49.939806  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:49.939711  681876 retry.go:31] will retry after 300.092754ms: waiting for machine to come up
	I0130 22:13:50.241301  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241860  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.241890  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.241804  681876 retry.go:31] will retry after 313.990905ms: waiting for machine to come up
	I0130 22:13:50.557661  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:50.558161  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:50.558077  681876 retry.go:31] will retry after 484.197655ms: waiting for machine to come up
	I0130 22:13:51.043815  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044313  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.044345  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.044255  681876 retry.go:31] will retry after 595.208415ms: waiting for machine to come up
	I0130 22:13:51.640765  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641244  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:51.641281  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:51.641207  681876 retry.go:31] will retry after 646.272845ms: waiting for machine to come up
	I0130 22:13:52.288980  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:52.289729  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:52.289599  681876 retry.go:31] will retry after 864.623353ms: waiting for machine to come up
	I0130 22:13:53.155328  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155826  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:53.155865  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:53.155750  681876 retry.go:31] will retry after 943.126628ms: waiting for machine to come up
	I0130 22:13:49.807842  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:49.807941  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:49.826075  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.308394  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.308476  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.323858  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:50.807449  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:50.807538  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:50.823237  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.307590  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.307684  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.322999  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:51.807466  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:51.807551  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:51.822502  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.308300  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.308431  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.329435  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.808248  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:52.808379  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:52.823821  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.308375  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.308462  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.321178  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:53.807637  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:53.807748  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:53.823761  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:54.308223  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.308300  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.320791  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:52.023827  680821 crio.go:444] Took 1.886590 seconds to copy over tarball
	I0130 22:13:52.023892  680821 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:13:55.116587  680821 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.092664003s)
	I0130 22:13:55.116614  680821 crio.go:451] Took 3.092762 seconds to extract the tarball
	I0130 22:13:55.116644  680821 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:13:55.159215  680821 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:13:55.210233  680821 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:13:55.210263  680821 cache_images.go:84] Images are preloaded, skipping loading
	I0130 22:13:55.210344  680821 ssh_runner.go:195] Run: crio config
	I0130 22:13:55.268468  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:13:55.268496  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:13:55.268519  680821 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:13:55.268545  680821 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-713938 NodeName:embed-certs-713938 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:13:55.268710  680821 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-713938"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:13:55.268801  680821 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-713938 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:13:55.268880  680821 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:13:55.278244  680821 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:13:55.278321  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:13:55.287034  680821 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0130 22:13:55.302012  680821 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:13:55.318716  680821 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2105 bytes)
	I0130 22:13:55.335364  680821 ssh_runner.go:195] Run: grep 192.168.72.213	control-plane.minikube.internal$ /etc/hosts
	I0130 22:13:55.338950  680821 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:13:55.349780  680821 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938 for IP: 192.168.72.213
	I0130 22:13:55.349814  680821 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:13:55.350000  680821 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:13:55.350058  680821 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:13:55.350157  680821 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/client.key
	I0130 22:13:55.350242  680821 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key.0982f839
	I0130 22:13:55.350299  680821 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key
	I0130 22:13:55.350469  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:13:55.350520  680821 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:13:55.350539  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:13:55.350577  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:13:55.350612  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:13:55.350648  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:13:55.350707  680821 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:13:55.351807  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:13:55.373160  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 22:13:55.394634  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:13:55.416281  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/embed-certs-713938/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:13:55.438713  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:13:55.460324  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:13:55.481480  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:13:55.502869  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:13:55.524520  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:13:55.547601  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:13:55.569483  680821 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:13:55.590741  680821 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:13:54.100347  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:54.100841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:54.100763  681876 retry.go:31] will retry after 1.412406258s: waiting for machine to come up
	I0130 22:13:55.514929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515302  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:55.515362  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:55.515267  681876 retry.go:31] will retry after 1.440442596s: waiting for machine to come up
	I0130 22:13:56.957895  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958367  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:56.958390  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:56.958326  681876 retry.go:31] will retry after 1.996277334s: waiting for machine to come up
	I0130 22:13:54.807936  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:54.808021  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:54.824410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.307845  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.307937  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.320645  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:55.808272  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:55.808384  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:55.820051  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.307482  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.307567  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.319410  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.808044  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.808167  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.820440  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.308301  680786 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.308409  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.323612  680786 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.323650  680786 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:13:57.323715  680786 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:13:57.323733  680786 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:13:57.323798  680786 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:57.364379  680786 cri.go:89] found id: ""
	I0130 22:13:57.364467  680786 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:13:57.380175  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:13:57.390701  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:13:57.390770  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400039  680786 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:13:57.400071  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:57.546658  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.567155  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.020447474s)
	I0130 22:13:58.567192  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.794332  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.875254  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:13:58.943890  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:13:58.944000  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:59.444721  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:13:55.608619  680821 ssh_runner.go:195] Run: openssl version
	I0130 22:13:55.880188  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:13:55.890762  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895346  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.895423  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:13:55.900872  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:13:55.911050  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:13:55.921117  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925362  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.925410  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:13:55.930499  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:13:55.940167  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:13:55.950284  680821 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954643  680821 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.954688  680821 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:13:55.959830  680821 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:13:55.969573  680821 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:13:55.973654  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:13:55.980878  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:13:55.988262  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:13:55.995379  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:13:56.002387  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:13:56.007729  680821 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
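	The openssl x509 -checkend 86400 calls above verify that each control-plane certificate stays valid for at least the next 24 hours before the cluster restart proceeds. A minimal Go sketch of an equivalent expiry check follows; it is illustrative only, not minikube's implementation, and the certificate path is a placeholder:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Placeholder path; the log above checks several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-etcd-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent to `openssl x509 -checkend 86400`: flag certs that expire within 24h.
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("certificate valid for more than 24h")
	}
}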
	I0130 22:13:56.013164  680821 kubeadm.go:404] StartCluster: {Name:embed-certs-713938 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-713938 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:13:56.013256  680821 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:13:56.013290  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:13:56.054588  680821 cri.go:89] found id: ""
	I0130 22:13:56.054670  680821 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:13:56.064691  680821 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:13:56.064720  680821 kubeadm.go:636] restartCluster start
	I0130 22:13:56.064781  680821 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:13:56.074132  680821 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.075653  680821 kubeconfig.go:92] found "embed-certs-713938" server: "https://192.168.72.213:8443"
	I0130 22:13:56.078677  680821 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:13:56.087919  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.087968  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.099213  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:56.588843  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:56.588940  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:56.601681  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.088185  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.088291  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.103229  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:57.588880  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:57.589012  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:57.604127  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.088751  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.088880  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.100833  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.588147  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:58.588264  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:58.604368  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.088571  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.088681  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.104028  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:59.588569  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:13:59.588684  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:13:59.602995  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.088596  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.088729  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.104195  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:00.588883  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:00.588987  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:00.605168  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:13:58.956101  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956568  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:13:58.956598  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:13:58.956511  681876 retry.go:31] will retry after 2.859682959s: waiting for machine to come up
	I0130 22:14:01.819863  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820443  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:01.820476  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:01.820388  681876 retry.go:31] will retry after 2.840054468s: waiting for machine to come up
	I0130 22:13:59.945172  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.444900  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:00.945042  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.444410  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:01.486688  680786 api_server.go:72] duration metric: took 2.54280014s to wait for apiserver process to appear ...
	I0130 22:14:01.486719  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:01.486775  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.487585  680786 api_server.go:269] stopped: https://192.168.61.232:8443/healthz: Get "https://192.168.61.232:8443/healthz": dial tcp 192.168.61.232:8443: connect: connection refused
	I0130 22:14:01.987279  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:01.088999  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.089091  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.104740  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:01.588046  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:01.588171  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:01.603186  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.088381  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.088495  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.104148  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:02.588728  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:02.588850  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:02.603782  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.088297  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.088396  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.101192  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:03.588856  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:03.588967  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:03.600516  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.088592  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.088688  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.101572  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.588042  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:04.588181  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:04.600890  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.088324  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.088437  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.103896  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:05.588678  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:05.588786  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:05.604329  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:04.974310  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:04.974343  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:04.974361  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.032790  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.032856  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.032882  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.052788  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:05.052811  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:05.487474  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.494053  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.494084  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:05.987783  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:05.994015  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:05.994049  680786 api_server.go:103] status: https://192.168.61.232:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:06.487723  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:14:06.492959  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:14:06.500169  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:14:06.500208  680786 api_server.go:131] duration metric: took 5.013479999s to wait for apiserver health ...
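	The healthz wait above tolerates a connection-refused error, several 403 responses, and two 500s before /healthz finally returns 200. A minimal Go sketch of that kind of polling loop, assuming the same URL from this log and skipping TLS verification purely for illustration (this is not the api_server.go implementation):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(60 * time.Second)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.232:8443/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			// 403/500 here correspond to the transient responses in the log above.
			fmt.Println("healthz status:", status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}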
	I0130 22:14:06.500221  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:14:06.500230  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:06.502253  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:04.661649  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.661976  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | unable to find current IP address of domain default-k8s-diff-port-850803 in network mk-default-k8s-diff-port-850803
	I0130 22:14:04.662010  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | I0130 22:14:04.661932  681876 retry.go:31] will retry after 4.414855002s: waiting for machine to come up
	I0130 22:14:06.503764  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:06.514909  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:06.534344  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:06.546282  680786 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:06.546323  680786 system_pods.go:61] "coredns-76f75df574-cvjdk" [3f6526d5-7bf6-4d51-96bc-9dc6f70ead98] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:06.546333  680786 system_pods.go:61] "etcd-no-preload-023824" [89ebff7a-3ac5-4aa7-aab7-9c61e59027a5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:06.546352  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [bea4217d-ad4c-4945-ac59-1589976698e2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:06.546369  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [4a1866ae-14ce-4132-bc99-225c518ab4bb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:06.546394  680786 system_pods.go:61] "kube-proxy-phh5j" [3e662e91-7886-44e7-87a0-4a727011062a] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:06.546407  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [ad7a7f1c-6aa6-4e16-94d5-e5db7d3e39f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:06.546422  680786 system_pods.go:61] "metrics-server-57f55c9bc5-qfj5x" [13ae9773-8607-43ae-a122-4f97b367a954] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:06.546433  680786 system_pods.go:61] "storage-provisioner" [50dd4d19-5e05-47b7-a11f-5975bc6ef0e2] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:06.546445  680786 system_pods.go:74] duration metric: took 12.076118ms to wait for pod list to return data ...
	I0130 22:14:06.546458  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:06.549604  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:06.549634  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:06.549645  680786 node_conditions.go:105] duration metric: took 3.179552ms to run NodePressure ...
	I0130 22:14:06.549662  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.858172  680786 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863712  680786 kubeadm.go:787] kubelet initialised
	I0130 22:14:06.863731  680786 kubeadm.go:788] duration metric: took 5.530573ms waiting for restarted kubelet to initialise ...
	I0130 22:14:06.863738  680786 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:06.869540  680786 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:08.886275  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:10.543927  680506 start.go:369] acquired machines lock for "old-k8s-version-912992" in 58.237287777s
	I0130 22:14:10.543984  680506 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:14:10.543993  680506 fix.go:54] fixHost starting: 
	I0130 22:14:10.544466  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:14:10.544494  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:14:10.563544  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37801
	I0130 22:14:10.564063  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:14:10.564683  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:14:10.564705  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:14:10.565128  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:14:10.565338  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:10.565526  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:14:10.567290  680506 fix.go:102] recreateIfNeeded on old-k8s-version-912992: state=Stopped err=<nil>
	I0130 22:14:10.567314  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	W0130 22:14:10.567565  680506 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:14:10.569441  680506 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-912992" ...
	I0130 22:14:06.089016  680821 api_server.go:166] Checking apiserver status ...
	I0130 22:14:06.089138  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:06.101226  680821 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:06.101265  680821 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:06.101276  680821 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:06.101292  680821 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:06.101373  680821 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:06.145816  680821 cri.go:89] found id: ""
	I0130 22:14:06.145935  680821 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:06.162118  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:06.174308  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:06.174379  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186134  680821 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:06.186164  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.312544  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:06.860323  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.068181  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.151741  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:07.236354  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:07.236461  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:07.737169  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.237398  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:08.737483  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.237152  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.736646  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:09.763936  680821 api_server.go:72] duration metric: took 2.527584407s to wait for apiserver process to appear ...
	I0130 22:14:09.763962  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:09.763991  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:09.078352  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078935  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Found IP for machine: 192.168.50.254
	I0130 22:14:09.078985  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has current primary IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.078997  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserving static IP address...
	I0130 22:14:09.079366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.079391  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | skip adding static IP to network mk-default-k8s-diff-port-850803 - found existing host DHCP lease matching {name: "default-k8s-diff-port-850803", mac: "52:54:00:b1:7c:86", ip: "192.168.50.254"}
	I0130 22:14:09.079411  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Getting to WaitForSSH function...
	I0130 22:14:09.079431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Reserved static IP address: 192.168.50.254
	I0130 22:14:09.079442  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Waiting for SSH to be available...
	I0130 22:14:09.082189  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082612  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.082638  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.082892  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH client type: external
	I0130 22:14:09.082917  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa (-rw-------)
	I0130 22:14:09.082982  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.254 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:09.082996  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | About to run SSH command:
	I0130 22:14:09.083009  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | exit 0
	I0130 22:14:09.182746  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:09.183304  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetConfigRaw
	I0130 22:14:09.184088  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.187115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187576  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.187606  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.187972  681007 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/config.json ...
	I0130 22:14:09.188234  681007 machine.go:88] provisioning docker machine ...
	I0130 22:14:09.188262  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:09.188470  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188648  681007 buildroot.go:166] provisioning hostname "default-k8s-diff-port-850803"
	I0130 22:14:09.188670  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.188822  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.191366  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191769  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.191808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.191929  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.192148  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192332  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.192488  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.192732  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.193245  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.193273  681007 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-850803 && echo "default-k8s-diff-port-850803" | sudo tee /etc/hostname
	I0130 22:14:09.344664  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-850803
	
	I0130 22:14:09.344700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.348016  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348485  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.348516  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.348685  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.348962  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349123  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.349280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.349505  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.349996  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.350025  681007 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-850803' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-850803/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-850803' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:09.490740  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:09.490778  681007 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:09.490812  681007 buildroot.go:174] setting up certificates
	I0130 22:14:09.490825  681007 provision.go:83] configureAuth start
	I0130 22:14:09.490844  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetMachineName
	I0130 22:14:09.491225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:09.494577  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495040  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.495085  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.495194  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.497931  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498407  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.498433  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.498638  681007 provision.go:138] copyHostCerts
	I0130 22:14:09.498702  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:09.498717  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:09.498778  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:14:09.498898  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:09.498912  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:09.498955  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:09.499039  681007 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:09.499052  681007 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:09.499080  681007 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:09.499147  681007 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-850803 san=[192.168.50.254 192.168.50.254 localhost 127.0.0.1 minikube default-k8s-diff-port-850803]
	I0130 22:14:09.749739  681007 provision.go:172] copyRemoteCerts
	I0130 22:14:09.749810  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:09.749848  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.753032  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753498  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.753533  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.753727  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.753945  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.754170  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.754364  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:09.851640  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:09.879906  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0130 22:14:09.907030  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:09.934916  681007 provision.go:86] duration metric: configureAuth took 444.054165ms
	I0130 22:14:09.934954  681007 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:09.935190  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:14:09.935324  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:09.938507  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.938854  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:09.938894  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:09.939068  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:09.939312  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939517  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:09.939700  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:09.939899  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:09.940390  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:09.940421  681007 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:10.275894  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:10.275935  681007 machine.go:91] provisioned docker machine in 1.087679661s
	I0130 22:14:10.275950  681007 start.go:300] post-start starting for "default-k8s-diff-port-850803" (driver="kvm2")
	I0130 22:14:10.275965  681007 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:10.275989  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.276387  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:10.276445  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.279676  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280069  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.280115  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.280364  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.280584  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.280766  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.280923  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.373204  681007 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:10.377609  681007 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:10.377637  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:10.377705  681007 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:10.377773  681007 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:10.377857  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:10.388096  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:10.414529  681007 start.go:303] post-start completed in 138.561717ms
	I0130 22:14:10.414557  681007 fix.go:56] fixHost completed within 21.7243684s
	I0130 22:14:10.414586  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.417282  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417709  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.417741  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.417872  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.418063  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418233  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.418356  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.418555  681007 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:10.419070  681007 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.50.254 22 <nil> <nil>}
	I0130 22:14:10.419086  681007 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:14:10.543719  681007 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652850.477584158
	
	I0130 22:14:10.543751  681007 fix.go:206] guest clock: 1706652850.477584158
	I0130 22:14:10.543762  681007 fix.go:219] Guest: 2024-01-30 22:14:10.477584158 +0000 UTC Remote: 2024-01-30 22:14:10.414562089 +0000 UTC m=+301.564256760 (delta=63.022069ms)
	I0130 22:14:10.543828  681007 fix.go:190] guest clock delta is within tolerance: 63.022069ms
	I0130 22:14:10.543837  681007 start.go:83] releasing machines lock for "default-k8s-diff-port-850803", held for 21.853682485s
	I0130 22:14:10.543884  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.544172  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:10.547453  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.547833  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.547907  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.548185  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554556  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:14:10.554902  681007 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:10.554975  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.555050  681007 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:10.555093  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:14:10.558413  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559108  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559387  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559438  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.559764  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:10.559808  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.559857  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:10.560050  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560137  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:14:10.560224  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560350  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:14:10.560579  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:14:10.560578  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.560760  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:14:10.681106  681007 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:10.688790  681007 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:10.845108  681007 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:10.853366  681007 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:10.853540  681007 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:10.873299  681007 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:10.873326  681007 start.go:475] detecting cgroup driver to use...
	I0130 22:14:10.873426  681007 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:10.891563  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:10.908180  681007 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:10.908258  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:10.921344  681007 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:10.935068  681007 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:11.036505  681007 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:11.151640  681007 docker.go:233] disabling docker service ...
	I0130 22:14:11.151718  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:11.167082  681007 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:11.178680  681007 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:11.303325  681007 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:11.410097  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:11.426297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:11.452546  681007 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:14:11.452634  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.463081  681007 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:11.463156  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.472742  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.482828  681007 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:11.494761  681007 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:11.507028  681007 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:11.517686  681007 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:11.517742  681007 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:11.530301  681007 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:14:11.541975  681007 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:11.696623  681007 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:14:11.913271  681007 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:11.913391  681007 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:11.919870  681007 start.go:543] Will wait 60s for crictl version
	I0130 22:14:11.919944  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:14:11.926064  681007 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:11.975070  681007 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:11.975177  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.033039  681007 ssh_runner.go:195] Run: crio --version
	I0130 22:14:12.081059  681007 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:14:10.570784  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Start
	I0130 22:14:10.571067  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring networks are active...
	I0130 22:14:10.571790  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network default is active
	I0130 22:14:10.572160  680506 main.go:141] libmachine: (old-k8s-version-912992) Ensuring network mk-old-k8s-version-912992 is active
	I0130 22:14:10.572697  680506 main.go:141] libmachine: (old-k8s-version-912992) Getting domain xml...
	I0130 22:14:10.573411  680506 main.go:141] libmachine: (old-k8s-version-912992) Creating domain...
	I0130 22:14:11.948333  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting to get IP...
	I0130 22:14:11.949455  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:11.950018  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:11.950060  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:11.949981  682021 retry.go:31] will retry after 276.511731ms: waiting for machine to come up
	I0130 22:14:12.228702  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.229508  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.229544  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.229445  682021 retry.go:31] will retry after 291.918453ms: waiting for machine to come up
	I0130 22:14:12.522882  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.523484  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.523520  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.523451  682021 retry.go:31] will retry after 411.891157ms: waiting for machine to come up
	I0130 22:14:12.082431  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetIP
	I0130 22:14:12.085750  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086144  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:14:12.086175  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:14:12.086400  681007 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:12.091494  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:12.104832  681007 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:14:12.104904  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:12.160529  681007 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:14:12.160610  681007 ssh_runner.go:195] Run: which lz4
	I0130 22:14:12.165037  681007 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:14:12.169743  681007 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:12.169772  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:14:11.379194  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.394473  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:13.254742  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.254788  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.254809  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.438140  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.438192  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.438210  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.470956  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:13.470985  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:13.764535  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:13.773346  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:13.773385  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.264393  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.277818  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:14.277878  680821 api_server.go:103] status: https://192.168.72.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:14.764145  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:14:14.769720  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:14:14.778872  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:14.778910  680821 api_server.go:131] duration metric: took 5.01493889s to wait for apiserver health ...
	I0130 22:14:14.778923  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:14:14.778931  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:14.780880  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:14.782682  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:14.798955  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:14.824975  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:14.841121  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:14.841166  680821 system_pods.go:61] "coredns-5dd5756b68-wcncl" [43c0f4bc-1d47-4337-a179-bb27a4164ca5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:14.841177  680821 system_pods.go:61] "etcd-embed-certs-713938" [f8c3bfda-0fca-429b-a0a2-b4fc1d496085] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:14.841196  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [7536531d-a1bd-451b-8530-143f9a41b85c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:14.841209  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [76c2d0eb-823a-41df-91dc-584acb56f81e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:14.841222  680821 system_pods.go:61] "kube-proxy-4c6nn" [253bee90-32a4-4dc0-9db7-bdfa663bcc96] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:14.841233  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [3b4e8324-e074-45ab-b24c-df1bd226e12e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:14.841247  680821 system_pods.go:61] "metrics-server-57f55c9bc5-hcg7l" [25906794-7927-48cf-8f80-52f8a2a68d99] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:14.841265  680821 system_pods.go:61] "storage-provisioner" [5820d2a9-be84-42e8-ac25-d4ac1cf22d90] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:14.841275  680821 system_pods.go:74] duration metric: took 16.275602ms to wait for pod list to return data ...
	I0130 22:14:14.841289  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:14.848145  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:14.848183  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:14.848198  680821 node_conditions.go:105] duration metric: took 6.903129ms to run NodePressure ...
	I0130 22:14:14.848221  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:15.186295  680821 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191845  680821 kubeadm.go:787] kubelet initialised
	I0130 22:14:15.191872  680821 kubeadm.go:788] duration metric: took 5.54389ms waiting for restarted kubelet to initialise ...
	I0130 22:14:15.191883  680821 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:15.202037  680821 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
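Note: the pod_ready waits that follow poll each system-critical pod's Ready condition until it reports True or the 4m0s budget runs out. A minimal client-go sketch of that check (kubeconfig path is a placeholder and the pod name is taken from this run; this is not minikube's pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-5dd5756b68-wcncl", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for pod to be Ready")
			return
		case <-time.After(2 * time.Second):
		}
	}
}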
	I0130 22:14:12.937414  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:12.938094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:12.938126  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:12.937994  682021 retry.go:31] will retry after 576.497569ms: waiting for machine to come up
	I0130 22:14:13.515903  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:13.516521  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:13.516547  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:13.516421  682021 retry.go:31] will retry after 519.706227ms: waiting for machine to come up
	I0130 22:14:14.037307  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.037937  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.037967  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.037845  682021 retry.go:31] will retry after 797.706186ms: waiting for machine to come up
	I0130 22:14:14.836997  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:14.837662  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:14.837686  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:14.837561  682021 retry.go:31] will retry after 782.265584ms: waiting for machine to come up
	I0130 22:14:15.621147  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:15.621747  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:15.621779  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:15.621706  682021 retry.go:31] will retry after 1.00093966s: waiting for machine to come up
	I0130 22:14:16.624002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:16.624474  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:16.624506  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:16.624365  682021 retry.go:31] will retry after 1.760162378s: waiting for machine to come up
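Note: the libmachine lines above retry the DHCP-lease lookup with a growing delay until the VM reports an IP. A small retry-with-backoff sketch in the same spirit (delays, jitter, and attempt count are illustrative, not libmachine's values):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
// growing, slightly jittered delay between tries, like the "will retry after"
// lines above.
func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
	delay := initial
	for i := 0; i < attempts; i++ {
		if err := fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return errors.New("machine did not come up in time")
}

func main() {
	err := retryWithBackoff(10, 500*time.Millisecond, func() error {
		// placeholder: the real flow asks libvirt for the domain's DHCP lease
		return errors.New("unable to find current IP address")
	})
	fmt.Println(err)
}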
	I0130 22:14:14.166451  681007 crio.go:444] Took 2.001438 seconds to copy over tarball
	I0130 22:14:14.166549  681007 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:17.707309  681007 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.540722039s)
	I0130 22:14:17.707346  681007 crio.go:451] Took 3.540858 seconds to extract the tarball
	I0130 22:14:17.707367  681007 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:14:17.751814  681007 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:17.817529  681007 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:14:17.817564  681007 cache_images.go:84] Images are preloaded, skipping loading
	I0130 22:14:17.817650  681007 ssh_runner.go:195] Run: crio config
	I0130 22:14:17.882693  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:17.882719  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:17.882745  681007 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:17.882777  681007 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.254 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-850803 NodeName:default-k8s-diff-port-850803 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:14:17.882963  681007 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.254
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-850803"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:17.883060  681007 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-850803 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0130 22:14:17.883125  681007 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:14:17.895645  681007 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:17.895725  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:17.906009  681007 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0130 22:14:17.923445  681007 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:17.941439  681007 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0130 22:14:17.958729  681007 ssh_runner.go:195] Run: grep 192.168.50.254	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:17.962941  681007 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:17.975030  681007 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803 for IP: 192.168.50.254
	I0130 22:14:17.975065  681007 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:17.975251  681007 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:17.975300  681007 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:17.975377  681007 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.key
	I0130 22:14:17.975436  681007 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key.c40bdd21
	I0130 22:14:17.975471  681007 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key
	I0130 22:14:17.975603  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:17.975634  681007 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:17.975642  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:17.975665  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:17.975689  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:17.975714  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:17.975751  681007 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:17.976423  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:18.003363  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:18.029597  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:18.053558  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:14:18.077340  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:18.100959  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:18.124756  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:18.148266  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:18.171688  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:18.195020  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:18.221728  681007 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:18.245353  681007 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:18.262630  681007 ssh_runner.go:195] Run: openssl version
	I0130 22:14:18.268255  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:18.279361  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284264  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.284318  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:18.290374  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:18.301414  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:18.312992  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317776  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.317826  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:18.323596  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:18.334360  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:18.346052  681007 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350871  681007 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.350917  681007 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:18.358340  681007 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
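Note: the three blocks above install each CA into the system trust store the OpenSSL way: hash the PEM's subject with `openssl x509 -hash` and symlink the file as /etc/ssl/certs/<hash>.0. A local-only Go sketch of those two steps (the real commands run over ssh_runner inside the guest):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of a PEM certificate and
// symlinks it into /etc/ssl/certs as <hash>.0, mirroring the commands above.
func linkCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	return exec.Command("ln", "-fs", pemPath, "/etc/ssl/certs/"+hash+".0").Run()
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}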
	I0130 22:14:18.371640  681007 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:18.376906  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:18.383780  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:18.390468  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:18.396506  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:18.402525  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:18.407949  681007 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
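Note: each `-checkend 86400` call above asks whether a certificate expires within the next 24 hours. The same check in Go with crypto/x509 (the path is one of the certs from this run; a sketch, not minikube's certificate validation code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the question `openssl x509 -checkend 86400` answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}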
	I0130 22:14:18.413375  681007 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-850803 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-850803 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:18.413454  681007 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:18.413546  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:18.460309  681007 cri.go:89] found id: ""
	I0130 22:14:18.460393  681007 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:18.474036  681007 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:18.474062  681007 kubeadm.go:636] restartCluster start
	I0130 22:14:18.474153  681007 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:18.484682  681007 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:18.486004  681007 kubeconfig.go:92] found "default-k8s-diff-port-850803" server: "https://192.168.50.254:8444"
	I0130 22:14:18.488661  681007 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:18.499334  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:18.499389  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:18.512812  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
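Note: this failing check, repeated below roughly every half second, looks for a running kube-apiserver by matching its full command line with pgrep; exit status 1 simply means no matching process yet. A local-only sketch of the same probe (the real call goes through ssh_runner on the guest):

package main

import (
	"fmt"
	"os/exec"
	"strconv"
	"strings"
)

// apiserverPID asks pgrep for the newest process whose full command line
// matches kube-apiserver.*minikube.*; a non-zero exit means it is not running.
func apiserverPID() (int, error) {
	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return 0, fmt.Errorf("apiserver not running: %w", err)
	}
	return strconv.Atoi(strings.TrimSpace(string(out)))
}

func main() {
	pid, err := apiserverPID()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver pid:", pid)
}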
	I0130 22:14:15.878232  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.047391  680786 pod_ready.go:102] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:17.215329  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:19.367292  680821 pod_ready.go:102] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:18.386828  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:18.387291  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:18.387324  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:18.387230  682021 retry.go:31] will retry after 1.961289931s: waiting for machine to come up
	I0130 22:14:20.351407  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:20.351939  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:20.351975  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:20.351883  682021 retry.go:31] will retry after 2.41188295s: waiting for machine to come up
	I0130 22:14:18.999791  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.011386  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.025823  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.499386  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:19.499505  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:19.513098  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.000365  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.000469  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.017498  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:20.500160  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:20.500286  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:20.517695  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.000275  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.000409  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.017613  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:21.499881  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:21.499974  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:21.516790  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.000448  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.000562  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.014377  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.499900  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.500014  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:22.513212  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:22.999725  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:22.999875  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.013983  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:23.499549  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.499654  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:23.515308  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:19.554357  680786 pod_ready.go:92] pod "coredns-76f75df574-cvjdk" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.685256  680786 pod_ready.go:81] duration metric: took 12.815676408s waiting for pod "coredns-76f75df574-cvjdk" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.685298  680786 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705805  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.705843  680786 pod_ready.go:81] duration metric: took 20.535204ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.705859  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716827  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:19.716859  680786 pod_ready.go:81] duration metric: took 10.990465ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:19.716873  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224601  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.224631  680786 pod_ready.go:81] duration metric: took 507.749018ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.224648  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231481  680786 pod_ready.go:92] pod "kube-proxy-phh5j" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.231507  680786 pod_ready.go:81] duration metric: took 6.849925ms waiting for pod "kube-proxy-phh5j" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.231519  680786 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237347  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:20.237372  680786 pod_ready.go:81] duration metric: took 5.84531ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:20.237383  680786 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.246204  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:24.248275  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:21.709185  680821 pod_ready.go:92] pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:21.709226  680821 pod_ready.go:81] duration metric: took 6.507155774s waiting for pod "coredns-5dd5756b68-wcncl" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:21.709240  680821 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716371  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.716398  680821 pod_ready.go:81] duration metric: took 2.007151614s waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.716407  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722781  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.722803  680821 pod_ready.go:81] duration metric: took 6.390258ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.722814  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729034  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.729055  680821 pod_ready.go:81] duration metric: took 6.235103ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.729063  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737325  680821 pod_ready.go:92] pod "kube-proxy-4c6nn" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.737348  680821 pod_ready.go:81] duration metric: took 8.279273ms waiting for pod "kube-proxy-4c6nn" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.737361  680821 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.742989  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:23.743013  680821 pod_ready.go:81] duration metric: took 5.643901ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:23.743024  680821 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:22.766642  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:22.767267  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:22.767359  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:22.767247  682021 retry.go:31] will retry after 2.473522194s: waiting for machine to come up
	I0130 22:14:25.242661  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:25.243221  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | unable to find current IP address of domain old-k8s-version-912992 in network mk-old-k8s-version-912992
	I0130 22:14:25.243246  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | I0130 22:14:25.243168  682021 retry.go:31] will retry after 4.117858968s: waiting for machine to come up
	I0130 22:14:23.999813  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:23.999897  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.012879  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.499381  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.499457  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:24.513834  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:24.999458  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:24.999554  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.014779  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.499957  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.500093  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:25.513275  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:25.999800  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:25.999901  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.011952  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.499447  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.499530  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:26.511962  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:26.999473  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:26.999579  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.012316  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:27.499767  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:27.499862  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:27.511793  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.000036  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.000127  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.012698  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.499393  681007 api_server.go:166] Checking apiserver status ...
	I0130 22:14:28.499495  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:28.511459  681007 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:28.511494  681007 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:28.511507  681007 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:28.511522  681007 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:28.511593  681007 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:28.550124  681007 cri.go:89] found id: ""
	I0130 22:14:28.550200  681007 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:28.566091  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:28.575952  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
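Note: the restart path only reuses on-disk state when all four kubeconfig files are present, which is exactly what the failed `ls -la` above establishes. A tiny sketch of that existence check (file list copied from the log; the helper name is made up):

package main

import (
	"fmt"
	"os"
)

// configFilesPresent mirrors the `ls -la` check above: stale-config cleanup is
// skipped unless every expected kubeconfig file exists.
func configFilesPresent() bool {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Println("missing:", f)
			return false
		}
	}
	return true
}

func main() {
	fmt.Println("all kubeconfigs present:", configFilesPresent())
}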
	I0130 22:14:28.576019  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584539  681007 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:28.584559  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:28.715666  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:26.744291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.744825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:25.752959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:28.250440  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:30.251820  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:29.365529  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366106  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has current primary IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.366142  680506 main.go:141] libmachine: (old-k8s-version-912992) Found IP for machine: 192.168.39.84
	I0130 22:14:29.366157  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserving static IP address...
	I0130 22:14:29.366732  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.366763  680506 main.go:141] libmachine: (old-k8s-version-912992) Reserved static IP address: 192.168.39.84
	I0130 22:14:29.366789  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | skip adding static IP to network mk-old-k8s-version-912992 - found existing host DHCP lease matching {name: "old-k8s-version-912992", mac: "52:54:00:ae:10:1a", ip: "192.168.39.84"}
	I0130 22:14:29.366805  680506 main.go:141] libmachine: (old-k8s-version-912992) Waiting for SSH to be available...
	I0130 22:14:29.366820  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Getting to WaitForSSH function...
	I0130 22:14:29.369195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369625  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.369648  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.369851  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH client type: external
	I0130 22:14:29.369899  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa (-rw-------)
	I0130 22:14:29.369956  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.84 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:14:29.369986  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | About to run SSH command:
	I0130 22:14:29.370002  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | exit 0
	I0130 22:14:29.469381  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | SSH cmd err, output: <nil>: 
	I0130 22:14:29.469800  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetConfigRaw
	I0130 22:14:29.470597  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.473253  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.473721  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.473748  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.474114  680506 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/config.json ...
	I0130 22:14:29.474312  680506 machine.go:88] provisioning docker machine ...
	I0130 22:14:29.474333  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:29.474552  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474741  680506 buildroot.go:166] provisioning hostname "old-k8s-version-912992"
	I0130 22:14:29.474767  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.474946  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.477297  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477636  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.477677  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.477927  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.478188  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478383  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.478541  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.478761  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.479265  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.479291  680506 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-912992 && echo "old-k8s-version-912992" | sudo tee /etc/hostname
	I0130 22:14:29.626924  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-912992
	
	I0130 22:14:29.626957  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.630607  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631062  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.631094  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.631278  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:29.631514  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631696  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:29.631891  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:29.632111  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:29.632505  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:29.632524  680506 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-912992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-912992/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-912992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:14:29.777390  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:14:29.777424  680506 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:14:29.777450  680506 buildroot.go:174] setting up certificates
	I0130 22:14:29.777484  680506 provision.go:83] configureAuth start
	I0130 22:14:29.777504  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetMachineName
	I0130 22:14:29.777846  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:29.781195  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781632  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.781682  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.781860  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:29.784395  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784744  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:29.784776  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:29.784895  680506 provision.go:138] copyHostCerts
	I0130 22:14:29.784960  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:14:29.784973  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:14:29.785039  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:14:29.785139  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:14:29.785148  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:14:29.785173  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:14:29.785231  680506 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:14:29.785240  680506 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:14:29.785263  680506 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
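The copyHostCerts lines above just refresh ca.pem, cert.pem and key.pem under .minikube from the certs/ directory, removing any stale copy first. A minimal stand-alone Go sketch of that remove-then-copy pattern (hypothetical paths, not minikube's actual helper):

    package main

    import (
    	"io"
    	"log"
    	"os"
    )

    // refreshCopy removes dst if it already exists, then copies src into it,
    // mirroring the "found ..., removing ..." / "cp: ..." pairs in the log above.
    func refreshCopy(src, dst string) error {
    	if _, err := os.Stat(dst); err == nil {
    		if err := os.Remove(dst); err != nil {
    			return err
    		}
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	// Hypothetical paths standing in for the .minikube cert locations.
    	if err := refreshCopy("certs/ca.pem", "ca.pem"); err != nil {
    		log.Fatal(err)
    	}
    }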
	I0130 22:14:29.785404  680506 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-912992 san=[192.168.39.84 192.168.39.84 localhost 127.0.0.1 minikube old-k8s-version-912992]
	I0130 22:14:30.047520  680506 provision.go:172] copyRemoteCerts
	I0130 22:14:30.047582  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:14:30.047607  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.050409  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050757  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.050790  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.050992  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.051204  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.051345  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.051517  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.143197  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:14:30.164424  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0130 22:14:30.185497  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0130 22:14:30.207694  680506 provision.go:86] duration metric: configureAuth took 430.192351ms
	I0130 22:14:30.207731  680506 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:14:30.207938  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:14:30.208031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.210616  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.210984  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.211029  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.211184  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.211404  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211560  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.211689  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.211838  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.212146  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.212161  680506 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:14:30.548338  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:14:30.548369  680506 machine.go:91] provisioned docker machine in 1.074040133s
	I0130 22:14:30.548384  680506 start.go:300] post-start starting for "old-k8s-version-912992" (driver="kvm2")
	I0130 22:14:30.548397  680506 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:14:30.548418  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.548802  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:14:30.548859  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.552482  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.552909  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.552945  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.553163  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.553368  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.553563  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.553702  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.649611  680506 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:14:30.654369  680506 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:14:30.654398  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:14:30.654527  680506 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:14:30.654606  680506 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:14:30.654692  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:14:30.664288  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:30.687603  680506 start.go:303] post-start completed in 139.202965ms
	I0130 22:14:30.687635  680506 fix.go:56] fixHost completed within 20.143642101s
	I0130 22:14:30.687663  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.690292  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690742  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.690780  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.690973  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.691179  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691381  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.691544  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.691751  680506 main.go:141] libmachine: Using SSH client type: native
	I0130 22:14:30.692061  680506 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.84 22 <nil> <nil>}
	I0130 22:14:30.692072  680506 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0130 22:14:30.827201  680506 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706652870.759760061
	
	I0130 22:14:30.827227  680506 fix.go:206] guest clock: 1706652870.759760061
	I0130 22:14:30.827237  680506 fix.go:219] Guest: 2024-01-30 22:14:30.759760061 +0000 UTC Remote: 2024-01-30 22:14:30.687640253 +0000 UTC m=+368.205420110 (delta=72.119808ms)
	I0130 22:14:30.827264  680506 fix.go:190] guest clock delta is within tolerance: 72.119808ms
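The guest-clock lines above compare the VM's `date +%s.%N` output with the host clock and only resync when the delta exceeds a tolerance (the 72ms measured here is well inside it). Roughly, in Go (the tolerance constant is illustrative, not minikube's value):

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // guestTime parses "seconds.nanoseconds" as printed by `date +%s.%N`.
    func guestTime(out string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec := int64(0)
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := guestTime("1706652870.759760061") // value taken from the log above
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Since(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = 2 * time.Second // illustrative threshold, not minikube's constant
    	fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
    }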
	I0130 22:14:30.827276  680506 start.go:83] releasing machines lock for "old-k8s-version-912992", held for 20.283317012s
	I0130 22:14:30.827301  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.827604  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:30.830260  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830761  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.830797  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.830974  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831570  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831747  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:14:30.831856  680506 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:14:30.831925  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.832004  680506 ssh_runner.go:195] Run: cat /version.json
	I0130 22:14:30.832031  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:14:30.834970  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835316  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835340  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835377  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835539  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.835794  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:30.835798  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.835816  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:30.835964  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:14:30.836028  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836202  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.836228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:14:30.836375  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:14:30.836573  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:14:30.931876  680506 ssh_runner.go:195] Run: systemctl --version
	I0130 22:14:30.959543  680506 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:14:31.114259  680506 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:14:31.122360  680506 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:14:31.122498  680506 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:14:31.142608  680506 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:14:31.142637  680506 start.go:475] detecting cgroup driver to use...
	I0130 22:14:31.142709  680506 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:14:31.159940  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:14:31.177310  680506 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:14:31.177394  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:14:31.197811  680506 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:14:31.215942  680506 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:14:31.341800  680506 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:14:31.476217  680506 docker.go:233] disabling docker service ...
	I0130 22:14:31.476303  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:14:31.493525  680506 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:14:31.505631  680506 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:14:31.630766  680506 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:14:31.744997  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:14:31.760432  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:14:31.778076  680506 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0130 22:14:31.778156  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.788945  680506 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:14:31.789063  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.799691  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.811057  680506 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:14:31.822879  680506 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:14:31.835071  680506 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:14:31.844391  680506 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:14:31.844478  680506 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:14:31.858948  680506 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
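The three commands above are the usual bridge-netfilter dance: probe the sysctl, fall back to loading br_netfilter when /proc/sys/net/bridge is missing, then enable IPv4 forwarding. A bare-bones equivalent over os/exec, reusing the same commands (error handling simplified):

    package main

    import (
    	"log"
    	"os/exec"
    )

    func run(name string, args ...string) error {
    	out, err := exec.Command(name, args...).CombinedOutput()
    	if err != nil {
    		log.Printf("%s %v: %v\n%s", name, args, err, out)
    	}
    	return err
    }

    func main() {
    	// If the sysctl is missing, the bridge module is not loaded yet.
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		_ = run("sudo", "modprobe", "br_netfilter") // best-effort, as in the log
    	}
    	// Enable IPv4 forwarding regardless.
    	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
    }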
	I0130 22:14:31.868566  680506 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:14:31.972874  680506 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:14:32.150449  680506 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:14:32.150536  680506 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:14:32.155130  680506 start.go:543] Will wait 60s for crictl version
	I0130 22:14:32.155192  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:32.158927  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:14:32.199472  680506 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:14:32.199568  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.245662  680506 ssh_runner.go:195] Run: crio --version
	I0130 22:14:32.308945  680506 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0130 22:14:32.310311  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetIP
	I0130 22:14:32.313118  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313548  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:14:32.313596  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:14:32.313777  680506 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 22:14:32.317774  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:32.333291  680506 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 22:14:32.333356  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:32.389401  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:32.389494  680506 ssh_runner.go:195] Run: which lz4
	I0130 22:14:32.394618  680506 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0130 22:14:32.399870  680506 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:14:32.399907  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0130 22:14:29.354779  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.576966  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.649608  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:29.729908  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:29.730008  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.230637  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:30.730130  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.231149  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:31.730722  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.230159  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:32.258815  681007 api_server.go:72] duration metric: took 2.528908545s to wait for apiserver process to appear ...
	I0130 22:14:32.258850  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:32.258872  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:31.245860  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:33.256817  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:32.753558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.761674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:34.208834  680506 crio.go:444] Took 1.814253 seconds to copy over tarball
	I0130 22:14:34.208929  680506 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:14:37.177389  680506 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.968423546s)
	I0130 22:14:37.177436  680506 crio.go:451] Took 2.968549 seconds to extract the tarball
	I0130 22:14:37.177450  680506 ssh_runner.go:146] rm: /preloaded.tar.lz4
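Extracting the preload is a single tar invocation with lz4 as the decompressor and extended attributes preserved, followed by removing the tarball. A small Go wrapper around the exact command shown above (a sketch only; it assumes it is allowed to run these privileged commands):

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Same flags as in the log: preserve xattrs (incl. security.capability),
    	// decompress with lz4, extract under /var.
    	cmd := exec.Command("sudo", "tar",
    		"--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
    	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    	if err := cmd.Run(); err != nil {
    		log.Fatalf("extracting preload: %v", err)
    	}
    	// The tarball is deleted once extracted, matching the rm step in the log.
    	_ = os.Remove("/preloaded.tar.lz4")
    }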
	I0130 22:14:37.233540  680506 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:14:37.291641  680506 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0130 22:14:37.291680  680506 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0130 22:14:37.291780  680506 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.291799  680506 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.291820  680506 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0130 22:14:37.291828  680506 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.291904  680506 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.291802  680506 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.292022  680506 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.291788  680506 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293663  680506 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0130 22:14:37.293709  680506 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.293740  680506 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.293753  680506 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.293662  680506 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.293800  680506 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.293884  680506 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.492113  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.494903  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.495618  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0130 22:14:37.508190  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:14:37.512582  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.514112  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.259261  681007 api_server.go:269] stopped: https://192.168.50.254:8444/healthz: Get "https://192.168.50.254:8444/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:37.259326  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:37.454899  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:37.454935  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:37.759230  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.420961  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.420997  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.421026  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:38.429934  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:38.429972  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:38.759948  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:35.746244  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.748221  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:37.252371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.752965  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
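The pod_ready probes repeat until the metrics-server pod reports the Ready condition. With client-go, one iteration of that check looks roughly like this (kubeconfig path and pod name are placeholders, not values from this run):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Placeholder pod name; the log polls a metrics-server pod in kube-system.
    	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "metrics-server-xxxxx", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	ready := false
    	for _, c := range pod.Status.Conditions {
    		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    			ready = true
    		}
    	}
    	fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
    }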
	I0130 22:14:40.032924  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.032973  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.032996  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.076077  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.076109  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.259372  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.268746  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0130 22:14:40.268785  681007 api_server.go:103] status: https://192.168.50.254:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0130 22:14:40.759307  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:14:40.764886  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:14:40.774834  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:14:40.774863  681007 api_server.go:131] duration metric: took 8.516004362s to wait for apiserver health ...
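The healthz wait keeps requesting /healthz over HTTPS (skipping verification of the VM's self-signed certificate) until it gets 200 "ok"; the 403 and 500 responses above are expected while the apiserver's post-start hooks finish. A compact version of such a loop (URL, timeout and retry interval are illustrative):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver cert is self-signed for the VM's IP, so skip verification
    		// for this health probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	deadline := time.Now().Add(1 * time.Minute)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get("https://192.168.50.254:8444/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("healthz returned 200: %s\n", body)
    				return
    			}
    			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("gave up waiting for healthz")
    }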
	I0130 22:14:40.774875  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:14:40.774883  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:40.776748  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:37.573794  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.589122  680506 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0130 22:14:37.589177  680506 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.589222  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.653263  680506 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.661867  680506 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0130 22:14:37.661918  680506 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.661974  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.681759  680506 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0130 22:14:37.681810  680506 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0130 22:14:37.681868  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811285  680506 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0130 22:14:37.811334  680506 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.811398  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811403  680506 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0130 22:14:37.811441  680506 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.811507  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811522  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0130 22:14:37.811592  680506 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0130 22:14:37.811646  680506 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.811684  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0130 22:14:37.811508  680506 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0130 22:14:37.811723  680506 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.811694  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811753  680506 ssh_runner.go:195] Run: which crictl
	I0130 22:14:37.811648  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0130 22:14:37.828948  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0130 22:14:37.887304  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0130 22:14:37.887396  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0130 22:14:37.924180  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0130 22:14:37.934685  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0130 22:14:37.934737  680506 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0130 22:14:37.934948  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0130 22:14:37.951228  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0130 22:14:37.955310  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0130 22:14:37.988234  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0130 22:14:38.007649  680506 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0130 22:14:38.007710  680506 cache_images.go:92] LoadImages completed in 716.017973ms
	W0130 22:14:38.007789  680506 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18014-640473/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0: no such file or directory
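Each image above is probed with `podman image inspect --format {{.Id}}`; a non-zero exit means the image is absent and must be transferred from the local cache (which is why the missing cache files trigger the warning). The probe reduces to roughly this sketch:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imageID returns the image ID if the runtime already has the image,
    // or an empty string (and no error) if it does not.
    func imageID(ref string) (string, error) {
    	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Output()
    	if err != nil {
    		// podman exits non-zero when the image is absent; treat that as "needs transfer".
    		if _, ok := err.(*exec.ExitError); ok {
    			return "", nil
    		}
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	id, err := imageID("registry.k8s.io/pause:3.1")
    	if err != nil {
    		panic(err)
    	}
    	if id == "" {
    		fmt.Println("registry.k8s.io/pause:3.1 needs transfer from the local cache")
    	} else {
    		fmt.Println("already present:", id)
    	}
    }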
	I0130 22:14:38.007920  680506 ssh_runner.go:195] Run: crio config
	I0130 22:14:38.081077  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:38.081112  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:38.081141  680506 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:14:38.081175  680506 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.84 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-912992 NodeName:old-k8s-version-912992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.84"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.84 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0130 22:14:38.082099  680506 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.84
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-912992"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.84
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.84"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-912992
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.39.84:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0130 22:14:38.082244  680506 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-912992 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.84
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
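For context, the kubeadm YAML and kubelet settings dumped above are produced by filling a small set of cluster parameters (cluster name, Kubernetes version, advertise address, pod and service CIDRs) into templates. The stand-alone Go sketch below illustrates that style of generation with text/template; the struct fields and the trimmed-down template are illustrative assumptions for this report, not minikube's actual generator.

// Illustrative sketch only: renders a cut-down ClusterConfiguration from a
// template. The parameter struct and template text are assumptions made for
// this example, not minikube's real bootstrapper code.
package main

import (
	"os"
	"text/template"
)

// clusterParams holds the values substituted into the YAML below.
type clusterParams struct {
	ClusterName       string
	KubernetesVersion string
	AdvertiseAddress  string
	PodSubnet         string
	ServiceSubnet     string
}

const clusterConfigTmpl = `apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
apiServer:
  certSANs: ["127.0.0.1", "localhost", "{{.AdvertiseAddress}}"]
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		ClusterName:       "old-k8s-version-912992",
		KubernetesVersion: "v1.16.0",
		AdvertiseAddress:  "192.168.39.84",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	// Parse the template once, then write the rendered YAML to stdout.
	t := template.Must(template.New("clusterConfig").Parse(clusterConfigTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}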
	I0130 22:14:38.082342  680506 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0130 22:14:38.091606  680506 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:14:38.091676  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:14:38.100424  680506 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0130 22:14:38.117658  680506 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:14:38.134721  680506 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0130 22:14:38.151680  680506 ssh_runner.go:195] Run: grep 192.168.39.84	control-plane.minikube.internal$ /etc/hosts
	I0130 22:14:38.155416  680506 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.84	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:14:38.169111  680506 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992 for IP: 192.168.39.84
	I0130 22:14:38.169145  680506 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:14:38.169305  680506 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:14:38.169342  680506 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:14:38.169412  680506 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.key
	I0130 22:14:38.169506  680506 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key.2e1821a6
	I0130 22:14:38.169547  680506 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key
	I0130 22:14:38.169654  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:14:38.169689  680506 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:14:38.169702  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:14:38.169726  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:14:38.169753  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:14:38.169776  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:14:38.169818  680506 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:14:38.170542  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:14:38.195046  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:14:38.217051  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:14:38.240099  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 22:14:38.266523  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:14:38.289237  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:14:38.313011  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:14:38.336140  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:14:38.359683  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:14:38.382658  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:14:38.407558  680506 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:14:38.435231  680506 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:14:38.453753  680506 ssh_runner.go:195] Run: openssl version
	I0130 22:14:38.459339  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:14:38.469159  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474001  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.474079  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:14:38.479508  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0130 22:14:38.489049  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:14:38.498644  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503289  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.503340  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:14:38.508873  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:14:38.518533  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:14:38.527871  680506 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532447  680506 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.532493  680506 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:14:38.538832  680506 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:14:38.549398  680506 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:14:38.553860  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0130 22:14:38.559537  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0130 22:14:38.565050  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0130 22:14:38.570705  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0130 22:14:38.576386  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0130 22:14:38.581918  680506 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
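The "openssl x509 -checkend 86400" runs above confirm that each control-plane certificate stays valid for at least another 24 hours before the cluster restart proceeds. A rough Go equivalent of that check, using crypto/x509 and a placeholder certificate path, is sketched below.

// Sketch of the same 24-hour expiry check the log performs with
// "openssl x509 -checkend 86400"; the certificate path is a placeholder.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt") // example path
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400 fails when the certificate expires within the next 86400 seconds.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}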
	I0130 22:14:38.587630  680506 kubeadm.go:404] StartCluster: {Name:old-k8s-version-912992 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-912992 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:14:38.587746  680506 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:14:38.587803  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:38.630328  680506 cri.go:89] found id: ""
	I0130 22:14:38.630420  680506 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:14:38.642993  680506 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0130 22:14:38.643026  680506 kubeadm.go:636] restartCluster start
	I0130 22:14:38.643095  680506 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0130 22:14:38.653192  680506 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:38.654325  680506 kubeconfig.go:92] found "old-k8s-version-912992" server: "https://192.168.39.84:8443"
	I0130 22:14:38.656891  680506 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0130 22:14:38.666689  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:38.666762  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:38.678857  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.167457  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.167543  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.179779  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:39.667279  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:39.667371  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:39.679872  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.167509  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.167607  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.181001  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.666977  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:40.667063  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:40.679278  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.167767  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.167850  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.182139  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:41.667595  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:41.667687  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:41.681165  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:42.167790  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.167888  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.180444  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:40.777979  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:40.798593  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:40.826400  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:40.839821  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:14:40.839847  681007 system_pods.go:61] "coredns-5dd5756b68-t65nr" [1379e1d2-263a-4d35-a630-4e197767b62d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0130 22:14:40.839856  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [e8468358-fd44-4f0e-b54b-13e9a478e259] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0130 22:14:40.839868  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [2e35ea0f-78e5-41b4-965a-c428408f84eb] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0130 22:14:40.839877  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [669d8c85-812f-4bfc-b3bb-7f5041ca8514] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0130 22:14:40.839890  681007 system_pods.go:61] "kube-proxy-9v5rw" [e97b697b-472b-4b3d-886b-39786c1b3760] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0130 22:14:40.839905  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [956ec644-071b-4390-b63e-8cbe9ad2a350] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0130 22:14:40.839918  681007 system_pods.go:61] "metrics-server-57f55c9bc5-wlzw4" [3d2bfab3-e9e2-484b-8b8d-779869cbcf9d] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:14:40.839927  681007 system_pods.go:61] "storage-provisioner" [e87ce7ad-4933-41b6-8e20-91a4e9ecc45c] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:14:40.839934  681007 system_pods.go:74] duration metric: took 13.512695ms to wait for pod list to return data ...
	I0130 22:14:40.839942  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:40.843711  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:40.843736  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:40.843747  681007 node_conditions.go:105] duration metric: took 3.799992ms to run NodePressure ...
	I0130 22:14:40.843762  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:41.200590  681007 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205872  681007 kubeadm.go:787] kubelet initialised
	I0130 22:14:41.205892  681007 kubeadm.go:788] duration metric: took 5.278409ms waiting for restarted kubelet to initialise ...
	I0130 22:14:41.205899  681007 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:14:41.214192  681007 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:43.221105  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:39.787175  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.243973  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.244009  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.250982  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:44.751725  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:42.667181  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:42.667264  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:42.679726  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.167750  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.167867  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.179954  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:43.667584  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:43.667715  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:43.680828  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.167107  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.167263  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.183107  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:44.667674  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:44.667749  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:44.680942  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.167589  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.167689  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.180786  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.667715  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:45.667811  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:45.681199  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.167671  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.167764  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.181276  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:46.666810  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:46.666952  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:46.680935  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:47.167612  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.167711  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.180385  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:45.221153  681007 pod_ready.go:102] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.221375  681007 pod_ready.go:92] pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:47.221398  681007 pod_ready.go:81] duration metric: took 6.00718187s waiting for pod "coredns-5dd5756b68-t65nr" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:47.221411  681007 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:46.244096  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:48.245476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:46.755543  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:49.252337  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:47.667527  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:47.667633  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:47.680519  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.167564  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.167659  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.179815  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.667656  680506 api_server.go:166] Checking apiserver status ...
	I0130 22:14:48.667733  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0130 22:14:48.682679  680506 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0130 22:14:48.682711  680506 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0130 22:14:48.682722  680506 kubeadm.go:1135] stopping kube-system containers ...
	I0130 22:14:48.682735  680506 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0130 22:14:48.682788  680506 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:14:48.726311  680506 cri.go:89] found id: ""
	I0130 22:14:48.726399  680506 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0130 22:14:48.744504  680506 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:14:48.755471  680506 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:14:48.755523  680506 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765613  680506 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0130 22:14:48.765636  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:48.886214  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:49.873929  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.090456  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.199471  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:14:50.278504  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:14:50.278604  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:50.779646  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.279488  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.779657  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:14:51.829813  680506 api_server.go:72] duration metric: took 1.551314483s to wait for apiserver process to appear ...
	I0130 22:14:51.829852  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:14:51.829888  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:51.830469  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": dial tcp 192.168.39.84:8443: connect: connection refused
	I0130 22:14:52.330162  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:49.228581  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.230115  681007 pod_ready.go:102] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.228169  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.228193  681007 pod_ready.go:81] duration metric: took 6.006776273s waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.228201  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233723  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.233746  681007 pod_ready.go:81] duration metric: took 5.53858ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.233754  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238962  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.238983  681007 pod_ready.go:81] duration metric: took 5.221325ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.238994  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247623  681007 pod_ready.go:92] pod "kube-proxy-9v5rw" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.247646  681007 pod_ready.go:81] duration metric: took 8.643709ms waiting for pod "kube-proxy-9v5rw" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.247657  681007 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254079  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:14:53.254102  681007 pod_ready.go:81] duration metric: took 6.435694ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:14:53.254113  681007 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
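The pod_ready waits above repeatedly fetch each system pod and log "Ready":"True" or "Ready":"False" until the pod's Ready condition becomes true or the 4m0s budget expires. The sketch below expresses that kind of loop with client-go; it is not minikube's implementation, and the kubeconfig path, namespace, and pod name are placeholders.

// Hedged sketch: wait for a single pod to report the Ready condition using
// client-go. Kubeconfig path, namespace, and pod name are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute) // mirrors the 4m0s budget in the log
	for time.Now().Before(deadline) {
		pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-example", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}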
	I0130 22:14:50.745213  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.245163  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:51.252956  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:53.750853  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.331302  680506 api_server.go:269] stopped: https://192.168.39.84:8443/healthz: Get "https://192.168.39.84:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0130 22:14:57.331361  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:55.262286  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.762588  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:55.245641  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.246341  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:58.248157  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.248193  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.248223  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.329248  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W0130 22:14:58.329276  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I0130 22:14:58.330342  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.349249  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.349288  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[-]poststarthook/crd-informer-synced failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:58.830998  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:58.836484  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:58.836510  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.330646  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.337516  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0130 22:14:59.337559  680506 api_server.go:103] status: https://192.168.39.84:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0130 22:14:59.830016  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:14:59.836129  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:14:59.846684  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:14:59.846741  680506 api_server.go:131] duration metric: took 8.016878739s to wait for apiserver health ...
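The healthz sequence above polls https://192.168.39.84:8443/healthz until the apiserver answers 200 ok, tolerating the intermediate 403 and 500 responses while the bootstrap post-start hooks finish. A minimal Go polling loop in the same spirit is sketched below; TLS verification is skipped only to keep the example self-contained, which a real client would not do.

// Minimal sketch, under stated assumptions: poll an apiserver /healthz endpoint
// until it returns HTTP 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // example only; trust the cluster CA in real use
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		} else {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.84:8443/healthz", 2*time.Minute); err != nil {
		panic(err)
	}
}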
	I0130 22:14:59.846760  680506 cni.go:84] Creating CNI manager for ""
	I0130 22:14:59.846770  680506 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:14:59.848874  680506 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:14:55.751242  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:57.755048  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:00.251809  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.850215  680506 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:14:59.860069  680506 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:14:59.880017  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:14:59.891300  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:14:59.891330  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:14:59.891335  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:14:59.891340  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:14:59.891345  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Pending
	I0130 22:14:59.891349  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:14:59.891352  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:14:59.891360  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:14:59.891368  680506 system_pods.go:74] duration metric: took 11.331282ms to wait for pod list to return data ...
	I0130 22:14:59.891377  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:14:59.895522  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:14:59.895558  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:14:59.895571  680506 node_conditions.go:105] duration metric: took 4.184167ms to run NodePressure ...
	I0130 22:14:59.895591  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0130 22:15:00.214560  680506 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218844  680506 kubeadm.go:787] kubelet initialised
	I0130 22:15:00.218863  680506 kubeadm.go:788] duration metric: took 4.278574ms waiting for restarted kubelet to initialise ...
	I0130 22:15:00.218870  680506 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:00.223310  680506 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.228349  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228371  680506 pod_ready.go:81] duration metric: took 5.033709ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.228380  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.228385  680506 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.236353  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236378  680506 pod_ready.go:81] duration metric: took 7.981988ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.236387  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "etcd-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.236394  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.244477  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244504  680506 pod_ready.go:81] duration metric: took 8.099653ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.244521  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.244531  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.283561  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283590  680506 pod_ready.go:81] duration metric: took 39.047028ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.283602  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.283610  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:00.683495  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683524  680506 pod_ready.go:81] duration metric: took 399.906973ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:00.683537  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-proxy-qm7xx" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:00.683544  680506 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:01.084061  680506 pod_ready.go:97] node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084093  680506 pod_ready.go:81] duration metric: took 400.538074ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	E0130 22:15:01.084107  680506 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-912992" hosting pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:01.084117  680506 pod_ready.go:38] duration metric: took 865.238684ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:01.084149  680506 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:15:01.120344  680506 ops.go:34] apiserver oom_adj: -16
	I0130 22:15:01.120372  680506 kubeadm.go:640] restartCluster took 22.477337631s
	I0130 22:15:01.120384  680506 kubeadm.go:406] StartCluster complete in 22.532762257s
	I0130 22:15:01.120408  680506 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.120536  680506 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:15:01.123018  680506 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:15:01.123321  680506 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:15:01.123514  680506 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:15:01.123624  680506 config.go:182] Loaded profile config "old-k8s-version-912992": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:15:01.123662  680506 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123683  680506 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-912992"
	I0130 22:15:01.123701  680506 addons.go:234] Setting addon metrics-server=true in "old-k8s-version-912992"
	W0130 22:15:01.123709  680506 addons.go:243] addon metrics-server should already be in state true
	I0130 22:15:01.123745  680506 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-912992"
	I0130 22:15:01.123769  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124153  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124178  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.124189  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124218  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.123635  680506 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-912992"
	I0130 22:15:01.124295  680506 addons.go:234] Setting addon storage-provisioner=true in "old-k8s-version-912992"
	W0130 22:15:01.124303  680506 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:15:01.124357  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.124693  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.124741  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.141006  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42609
	I0130 22:15:01.141022  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45069
	I0130 22:15:01.141594  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.141697  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.142122  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142142  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142273  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.142297  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.142793  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.142837  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43359
	I0130 22:15:01.142797  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.143291  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.143380  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.143411  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.143758  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.143786  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.144174  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.144210  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.144212  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.144438  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.148328  680506 addons.go:234] Setting addon default-storageclass=true in "old-k8s-version-912992"
	W0130 22:15:01.148350  680506 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:15:01.148378  680506 host.go:66] Checking if "old-k8s-version-912992" exists ...
	I0130 22:15:01.148706  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.148734  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.163324  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39799
	I0130 22:15:01.163720  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35669
	I0130 22:15:01.164054  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164187  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.164638  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164665  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.164806  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.164817  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.165086  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165242  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.165310  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.165844  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.167686  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.170253  680506 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:15:01.168142  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.169379  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37175
	I0130 22:15:01.172172  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:15:01.172200  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:15:01.172228  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.174608  680506 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:15:01.173335  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.175891  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.176824  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.177101  680506 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.177110  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.177116  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:15:01.177134  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.177137  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.177239  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.177855  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.178037  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.181184  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181626  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.181644  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.181879  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.182032  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.182215  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.182321  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.182343  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.182745  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.182805  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.183262  680506 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:15:01.183296  680506 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:15:01.218510  680506 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39179
	I0130 22:15:01.218955  680506 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:15:01.219566  680506 main.go:141] libmachine: Using API Version  1
	I0130 22:15:01.219598  680506 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:15:01.219976  680506 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:15:01.220136  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetState
	I0130 22:15:01.221882  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .DriverName
	I0130 22:15:01.222143  680506 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.222161  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:15:01.222178  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHHostname
	I0130 22:15:01.225129  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225437  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ae:10:1a", ip: ""} in network mk-old-k8s-version-912992: {Iface:virbr3 ExpiryTime:2024-01-30 23:14:24 +0000 UTC Type:0 Mac:52:54:00:ae:10:1a Iaid: IPaddr:192.168.39.84 Prefix:24 Hostname:old-k8s-version-912992 Clientid:01:52:54:00:ae:10:1a}
	I0130 22:15:01.225454  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | domain old-k8s-version-912992 has defined IP address 192.168.39.84 and MAC address 52:54:00:ae:10:1a in network mk-old-k8s-version-912992
	I0130 22:15:01.225732  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHPort
	I0130 22:15:01.225875  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHKeyPath
	I0130 22:15:01.225948  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .GetSSHUsername
	I0130 22:15:01.226015  680506 sshutil.go:53] new ssh client: &{IP:192.168.39.84 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/old-k8s-version-912992/id_rsa Username:docker}
	I0130 22:15:01.362950  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:15:01.405756  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:15:01.405829  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:15:01.442804  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:15:01.468468  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:15:01.468501  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:15:01.514493  680506 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.514530  680506 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:15:01.531543  680506 start.go:902] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0130 22:15:01.551886  680506 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:15:01.697743  680506 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-912992" context rescaled to 1 replicas
	I0130 22:15:01.697805  680506 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.84 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:15:01.699954  680506 out.go:177] * Verifying Kubernetes components...
	I0130 22:15:01.701746  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078654  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078682  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078704  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078621  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.078736  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.078751  680506 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:02.079190  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079200  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079221  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079229  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079231  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079235  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079245  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079246  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079200  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079257  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.079266  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.079665  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079685  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079695  680506 addons.go:470] Verifying addon metrics-server=true in "old-k8s-version-912992"
	I0130 22:15:02.079699  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.079719  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.079669  680506 main.go:141] libmachine: (old-k8s-version-912992) DBG | Closing plugin on server side
	I0130 22:15:02.081702  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081725  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.081736  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.081746  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.081969  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.081999  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.087366  680506 main.go:141] libmachine: Making call to close driver server
	I0130 22:15:02.087387  680506 main.go:141] libmachine: (old-k8s-version-912992) Calling .Close
	I0130 22:15:02.087642  680506 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:15:02.087661  680506 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:15:02.089698  680506 out.go:177] * Enabled addons: metrics-server, storage-provisioner, default-storageclass
	I0130 22:15:02.091156  680506 addons.go:505] enable addons completed in 967.651598ms: enabled=[metrics-server storage-provisioner default-storageclass]
	I0130 22:14:59.767179  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.262656  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:14:59.743796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:01.745268  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.245639  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:02.754252  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:05.250850  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:04.082265  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:06.582230  680506 node_ready.go:58] node "old-k8s-version-912992" has status "Ready":"False"
	I0130 22:15:04.764379  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.764868  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.765839  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:06.744476  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.744978  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:08.584004  680506 node_ready.go:49] node "old-k8s-version-912992" has status "Ready":"True"
	I0130 22:15:08.584038  680506 node_ready.go:38] duration metric: took 6.50526711s waiting for node "old-k8s-version-912992" to be "Ready" ...
	I0130 22:15:08.584052  680506 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:08.591084  680506 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595709  680506 pod_ready.go:92] pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.595735  680506 pod_ready.go:81] duration metric: took 4.623355ms waiting for pod "coredns-5644d7b6d9-7wr8t" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.595747  680506 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600152  680506 pod_ready.go:92] pod "etcd-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.600175  680506 pod_ready.go:81] duration metric: took 4.419847ms waiting for pod "etcd-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.600186  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604426  680506 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.604444  680506 pod_ready.go:81] duration metric: took 4.249901ms waiting for pod "kube-apiserver-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.604454  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608671  680506 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.608685  680506 pod_ready.go:81] duration metric: took 4.224838ms waiting for pod "kube-controller-manager-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.608694  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984275  680506 pod_ready.go:92] pod "kube-proxy-qm7xx" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:08.984306  680506 pod_ready.go:81] duration metric: took 375.604271ms waiting for pod "kube-proxy-qm7xx" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:08.984321  680506 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384278  680506 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace has status "Ready":"True"
	I0130 22:15:09.384303  680506 pod_ready.go:81] duration metric: took 399.974439ms waiting for pod "kube-scheduler-old-k8s-version-912992" in "kube-system" namespace to be "Ready" ...
	I0130 22:15:09.384316  680506 pod_ready.go:38] duration metric: took 800.249209ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:15:09.384331  680506 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:15:09.384383  680506 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:15:09.399639  680506 api_server.go:72] duration metric: took 7.701783762s to wait for apiserver process to appear ...
	I0130 22:15:09.399665  680506 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:15:09.399683  680506 api_server.go:253] Checking apiserver healthz at https://192.168.39.84:8443/healthz ...
	I0130 22:15:09.406824  680506 api_server.go:279] https://192.168.39.84:8443/healthz returned 200:
	ok
	I0130 22:15:09.407829  680506 api_server.go:141] control plane version: v1.16.0
	I0130 22:15:09.407850  680506 api_server.go:131] duration metric: took 8.177146ms to wait for apiserver health ...
	I0130 22:15:09.407860  680506 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:15:09.584994  680506 system_pods.go:59] 7 kube-system pods found
	I0130 22:15:09.585031  680506 system_pods.go:61] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.585039  680506 system_pods.go:61] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.585046  680506 system_pods.go:61] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.585053  680506 system_pods.go:61] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.585059  680506 system_pods.go:61] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.585065  680506 system_pods.go:61] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.585072  680506 system_pods.go:61] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.585080  680506 system_pods.go:74] duration metric: took 177.213093ms to wait for pod list to return data ...
	I0130 22:15:09.585092  680506 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:15:09.784286  680506 default_sa.go:45] found service account: "default"
	I0130 22:15:09.784313  680506 default_sa.go:55] duration metric: took 199.211541ms for default service account to be created ...
	I0130 22:15:09.784322  680506 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:15:09.987063  680506 system_pods.go:86] 7 kube-system pods found
	I0130 22:15:09.987094  680506 system_pods.go:89] "coredns-5644d7b6d9-7wr8t" [4b6a3982-1256-41e6-9311-1195746df25a] Running
	I0130 22:15:09.987103  680506 system_pods.go:89] "etcd-old-k8s-version-912992" [53de6aad-3229-4f55-9593-874e6e57e856] Running
	I0130 22:15:09.987109  680506 system_pods.go:89] "kube-apiserver-old-k8s-version-912992" [0084eeb0-9487-4c7a-adef-49b98d6b27ba] Running
	I0130 22:15:09.987114  680506 system_pods.go:89] "kube-controller-manager-old-k8s-version-912992" [7a85b508-c064-45f5-bddb-95ad7e401994] Running
	I0130 22:15:09.987120  680506 system_pods.go:89] "kube-proxy-qm7xx" [4a8cca85-87c7-4d02-b5cd-4bb83bd5ef7d] Running
	I0130 22:15:09.987125  680506 system_pods.go:89] "kube-scheduler-old-k8s-version-912992" [8becace9-3e21-434b-b89a-d23fb42dda17] Running
	I0130 22:15:09.987131  680506 system_pods.go:89] "storage-provisioner" [9cb43cc9-7d15-41a9-90b6-66fc99fa67e5] Running
	I0130 22:15:09.987140  680506 system_pods.go:126] duration metric: took 202.811673ms to wait for k8s-apps to be running ...
	I0130 22:15:09.987150  680506 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:15:09.987206  680506 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:15:10.001966  680506 system_svc.go:56] duration metric: took 14.805505ms WaitForService to wait for kubelet.
	I0130 22:15:10.001997  680506 kubeadm.go:581] duration metric: took 8.30415043s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:15:10.002022  680506 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:15:10.184699  680506 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:15:10.184743  680506 node_conditions.go:123] node cpu capacity is 2
	I0130 22:15:10.184756  680506 node_conditions.go:105] duration metric: took 182.728475ms to run NodePressure ...
	I0130 22:15:10.184772  680506 start.go:228] waiting for startup goroutines ...
	I0130 22:15:10.184782  680506 start.go:233] waiting for cluster config update ...
	I0130 22:15:10.184796  680506 start.go:242] writing updated cluster config ...
	I0130 22:15:10.185114  680506 ssh_runner.go:195] Run: rm -f paused
	I0130 22:15:10.239744  680506 start.go:600] kubectl: 1.29.1, cluster: 1.16.0 (minor skew: 13)
	I0130 22:15:10.241916  680506 out.go:177] 
	W0130 22:15:10.243307  680506 out.go:239] ! /usr/local/bin/kubectl is version 1.29.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0130 22:15:10.244540  680506 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0130 22:15:10.245844  680506 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-912992" cluster and "default" namespace by default
	I0130 22:15:07.753442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.250385  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:10.770107  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.262302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:11.244598  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:13.744540  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:12.252794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:14.750293  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:15.761573  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:17.764138  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.245719  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.744763  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:16.751093  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:18.751144  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:19.766344  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:22.262506  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.243857  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.244633  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:21.250405  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:23.752715  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:24.762412  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.260985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:25.744105  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:27.746611  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:26.250066  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:28.250115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.251911  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:29.262020  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:31.763782  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:30.243836  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.244064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.244535  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:32.754073  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:35.249927  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:34.260099  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.262332  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.262515  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:36.245173  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:38.747970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:37.252466  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:39.254833  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:40.264075  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:42.763978  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.244902  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.246545  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:41.750938  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:43.751361  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.262599  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.769508  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:45.743965  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:47.745769  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:46.250381  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:48.250841  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.262796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.763728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:49.746064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:51.750634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.244634  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:50.750564  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:52.751105  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:54.751544  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:55.261060  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:57.262293  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.245111  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:58.246787  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:56.751681  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.250409  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:15:59.762572  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.765901  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:00.744216  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:02.744765  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:01.750473  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.252199  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:04.267246  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.764985  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:05.252271  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:07.745483  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:06.252327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:08.750460  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:09.263071  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.764448  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:10.244124  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:12.245643  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.248183  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:11.254631  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:13.752086  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:14.262534  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.763532  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.744988  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.746562  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:16.251554  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:18.751130  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:19.261302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.262097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.764162  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:21.243403  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.245825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:20.751443  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:23.251248  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:26.261011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.263281  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.744554  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:27.744970  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:25.750244  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:28.249555  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.250246  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:30.761252  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.762070  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:29.745453  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.243772  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.245396  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:32.251218  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:34.752524  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:35.261942  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.264695  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:36.745702  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.244617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:37.250645  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.251192  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:39.762454  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.765643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.244956  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.245892  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:41.750084  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:43.751479  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:44.262004  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.262160  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.763669  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:45.744222  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:47.745591  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:46.249746  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:48.250654  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.252500  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:51.261603  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:53.261672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:50.244099  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.744215  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:52.749766  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.750634  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:55.261803  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:57.262915  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:54.744549  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.745030  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.244809  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:56.751851  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.258417  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:16:59.268254  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.761347  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.761999  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.246996  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.744672  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:01.750976  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:03.751083  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:05.763147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.264472  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.244449  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.244796  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:06.250266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:08.250718  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.761567  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.762159  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.245064  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.744572  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:10.750221  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:12.750688  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.752051  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:15.261414  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.262083  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:14.745621  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.243837  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.244825  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:17.250798  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.251873  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:19.262614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.761873  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.762158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.245432  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:23.745684  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:21.750760  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:24.252401  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:25.762960  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.261732  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.246290  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.744375  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:26.749794  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:28.750363  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:30.262011  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:32.762896  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.243646  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.245351  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:31.251364  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:33.750995  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.262828  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.763644  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.245530  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.246211  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:35.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:37.752489  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.251704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:40.261365  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.261786  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:39.745084  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:41.746617  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.244143  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:42.750921  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:45.251115  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:44.262664  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.764196  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.769165  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:46.744967  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:48.745930  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:47.751743  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:50.250561  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.261754  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.764405  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:51.244859  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:53.744487  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:52.254402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:54.751442  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:56.260885  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.261304  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:55.747588  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:58.244383  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:57.250767  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:17:59.750343  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.262535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.762755  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:00.248648  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:02.744883  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:01.751253  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:03.751595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:04.763841  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.263079  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:05.244262  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:07.244758  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.245079  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:06.252399  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:08.750732  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:09.263723  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.766305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.771997  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:11.744688  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:14.243700  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:10.751691  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:13.254909  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.263146  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.764654  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:16.244291  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:18.250725  680786 pod_ready.go:102] pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:15.751459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:17.752591  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.251354  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:21.263171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.762025  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:20.238489  680786 pod_ready.go:81] duration metric: took 4m0.001085938s waiting for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:20.238561  680786 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-qfj5x" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:20.238585  680786 pod_ready.go:38] duration metric: took 4m13.374837351s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:20.238635  680786 kubeadm.go:640] restartCluster took 4m32.952408079s
	W0130 22:18:20.238771  680786 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:20.238897  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:22.752701  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:23.743814  680821 pod_ready.go:81] duration metric: took 4m0.000772856s waiting for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:23.743843  680821 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-hcg7l" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:23.743867  680821 pod_ready.go:38] duration metric: took 4m8.55197109s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:23.743901  680821 kubeadm.go:640] restartCluster took 4m27.679173945s
	W0130 22:18:23.743979  680821 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:23.744016  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:25.762818  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:27.766206  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:30.262706  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:32.263895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:33.696118  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.457184259s)
	I0130 22:18:33.696246  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:33.709756  680786 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:33.719095  680786 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:33.727249  680786 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:33.727304  680786 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:33.783803  680786 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0130 22:18:33.783934  680786 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:33.947330  680786 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:33.947473  680786 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:33.947594  680786 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:34.185129  680786 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:34.186847  680786 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:34.186958  680786 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:34.187047  680786 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:34.187130  680786 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:34.187254  680786 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:34.187590  680786 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:34.188233  680786 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:34.188591  680786 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:34.189435  680786 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:34.189737  680786 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:34.190284  680786 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:34.190677  680786 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:34.190788  680786 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:34.357057  680786 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:34.468135  680786 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0130 22:18:34.785137  680786 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:34.900902  680786 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:34.973785  680786 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:34.974693  680786 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:34.977481  680786 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:37.518038  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (13.773993992s)
	I0130 22:18:37.518130  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:37.533148  680821 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:18:37.542965  680821 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:18:37.552859  680821 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:18:37.552915  680821 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:18:37.614837  680821 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:18:37.614964  680821 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:18:37.783252  680821 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:18:37.783431  680821 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:18:37.783598  680821 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:18:38.009789  680821 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:18:38.011805  680821 out.go:204]   - Generating certificates and keys ...
	I0130 22:18:38.011921  680821 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:18:38.012010  680821 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:18:38.012140  680821 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:18:38.012573  680821 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:18:38.013135  680821 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:18:38.014103  680821 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:18:38.015459  680821 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:18:38.016522  680821 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:18:38.017879  680821 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:18:38.018669  680821 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:18:38.019318  680821 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:18:38.019416  680821 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:18:38.190496  680821 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:18:38.487122  680821 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:18:38.567485  680821 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:18:38.764572  680821 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:18:38.765081  680821 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:18:38.771540  680821 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:18:34.761686  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:36.763512  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:38.772838  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:34.979275  680786 out.go:204]   - Booting up control plane ...
	I0130 22:18:34.979394  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:34.979502  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:34.979687  680786 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:35.000161  680786 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:35.001100  680786 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:35.001180  680786 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:35.143762  680786 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:38.773177  680821 out.go:204]   - Booting up control plane ...
	I0130 22:18:38.773326  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:18:38.773447  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:18:38.774160  680821 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:18:38.793263  680821 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:18:38.793414  680821 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:18:38.793489  680821 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:18:38.942605  680821 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:18:41.263027  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.264305  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:43.147099  680786 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003222 seconds
	I0130 22:18:43.165914  680786 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:43.183810  680786 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:43.729066  680786 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:43.729309  680786 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-023824 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:44.247224  680786 kubeadm.go:322] [bootstrap-token] Using token: 8v59zo.bsn08ubvfg01lew3
	I0130 22:18:44.248930  680786 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:44.249075  680786 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:44.256127  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:44.265628  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:44.269906  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:44.278100  680786 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:44.283097  680786 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:44.301902  680786 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:44.542713  680786 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:44.665337  680786 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:44.665367  680786 kubeadm.go:322] 
	I0130 22:18:44.665448  680786 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:44.665463  680786 kubeadm.go:322] 
	I0130 22:18:44.665573  680786 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:44.665583  680786 kubeadm.go:322] 
	I0130 22:18:44.665660  680786 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:44.665761  680786 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:44.665830  680786 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:44.665840  680786 kubeadm.go:322] 
	I0130 22:18:44.665909  680786 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:44.665927  680786 kubeadm.go:322] 
	I0130 22:18:44.665994  680786 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:44.666003  680786 kubeadm.go:322] 
	I0130 22:18:44.666084  680786 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:44.666220  680786 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:44.666324  680786 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:44.666349  680786 kubeadm.go:322] 
	I0130 22:18:44.666456  680786 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:44.666544  680786 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:44.666551  680786 kubeadm.go:322] 
	I0130 22:18:44.666646  680786 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.666764  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:44.666789  680786 kubeadm.go:322] 	--control-plane 
	I0130 22:18:44.666795  680786 kubeadm.go:322] 
	I0130 22:18:44.666898  680786 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:44.666906  680786 kubeadm.go:322] 
	I0130 22:18:44.667000  680786 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 8v59zo.bsn08ubvfg01lew3 \
	I0130 22:18:44.667121  680786 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:44.667741  680786 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:44.667773  680786 cni.go:84] Creating CNI manager for ""
	I0130 22:18:44.667784  680786 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:44.669613  680786 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:47.444081  680821 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.502071 seconds
	I0130 22:18:47.444241  680821 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:18:47.470140  680821 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:18:48.014141  680821 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:18:48.014385  680821 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-713938 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:18:48.528168  680821 kubeadm.go:322] [bootstrap-token] Using token: 5j3t7l.lolt26xy60ozf3ca
	I0130 22:18:45.765205  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.261716  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:48.529669  680821 out.go:204]   - Configuring RBAC rules ...
	I0130 22:18:48.529807  680821 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:18:48.544442  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:18:48.552536  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:18:48.555846  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:18:48.559711  680821 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:18:48.563810  680821 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:18:48.580095  680821 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:18:48.820236  680821 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:18:48.950911  680821 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:18:48.951833  680821 kubeadm.go:322] 
	I0130 22:18:48.951927  680821 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:18:48.951958  680821 kubeadm.go:322] 
	I0130 22:18:48.952042  680821 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:18:48.952063  680821 kubeadm.go:322] 
	I0130 22:18:48.952089  680821 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:18:48.952144  680821 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:18:48.952190  680821 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:18:48.952196  680821 kubeadm.go:322] 
	I0130 22:18:48.952267  680821 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:18:48.952287  680821 kubeadm.go:322] 
	I0130 22:18:48.952346  680821 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:18:48.952356  680821 kubeadm.go:322] 
	I0130 22:18:48.952439  680821 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:18:48.952554  680821 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:18:48.952661  680821 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:18:48.952671  680821 kubeadm.go:322] 
	I0130 22:18:48.952805  680821 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:18:48.952894  680821 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:18:48.952906  680821 kubeadm.go:322] 
	I0130 22:18:48.953001  680821 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953139  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:18:48.953177  680821 kubeadm.go:322] 	--control-plane 
	I0130 22:18:48.953189  680821 kubeadm.go:322] 
	I0130 22:18:48.953296  680821 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:18:48.953306  680821 kubeadm.go:322] 
	I0130 22:18:48.953413  680821 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5j3t7l.lolt26xy60ozf3ca \
	I0130 22:18:48.953555  680821 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:18:48.954606  680821 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0130 22:18:48.954659  680821 cni.go:84] Creating CNI manager for ""
	I0130 22:18:48.954677  680821 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:18:48.956379  680821 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:18:44.671035  680786 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:44.696043  680786 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:44.785738  680786 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:44.785867  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.785894  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=no-preload-023824 minikube.k8s.io/updated_at=2024_01_30T22_18_44_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:44.887327  680786 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:45.135926  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:45.636755  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.136406  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:46.636077  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.136080  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:47.636924  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.136830  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.636945  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.136038  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:48.957922  680821 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:18:48.974487  680821 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:18:49.035551  680821 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.035666  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=embed-certs-713938 minikube.k8s.io/updated_at=2024_01_30T22_18_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.085285  680821 ops.go:34] apiserver oom_adj: -16
	I0130 22:18:49.366490  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:49.866648  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.366789  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.761888  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:52.765352  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:18:53.254549  681007 pod_ready.go:81] duration metric: took 4m0.000414494s waiting for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" ...
	E0130 22:18:53.254593  681007 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-wlzw4" in "kube-system" namespace to be "Ready" (will not retry!)
	I0130 22:18:53.254623  681007 pod_ready.go:38] duration metric: took 4m12.048715105s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:53.254662  681007 kubeadm.go:640] restartCluster took 4m34.780590329s
	W0130 22:18:53.254758  681007 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0130 22:18:53.254793  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0130 22:18:49.635946  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.136681  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.636090  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.136427  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.636232  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.136032  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.636639  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.136839  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.636957  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.136140  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:50.866857  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.367211  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:51.867291  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.366659  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:52.867351  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.366925  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:53.867180  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.366846  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.866651  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.366588  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:54.636246  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.136047  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:55.636970  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.136258  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.636239  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.136269  680786 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.262159  680786 kubeadm.go:1088] duration metric: took 12.476361074s to wait for elevateKubeSystemPrivileges.
	I0130 22:18:57.262235  680786 kubeadm.go:406] StartCluster complete in 5m10.025020914s
	I0130 22:18:57.262288  680786 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.262417  680786 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:18:57.265204  680786 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:18:57.265504  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:18:57.265655  680786 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:18:57.265746  680786 addons.go:69] Setting storage-provisioner=true in profile "no-preload-023824"
	I0130 22:18:57.265769  680786 addons.go:234] Setting addon storage-provisioner=true in "no-preload-023824"
	W0130 22:18:57.265784  680786 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:18:57.265774  680786 addons.go:69] Setting default-storageclass=true in profile "no-preload-023824"
	I0130 22:18:57.265812  680786 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-023824"
	I0130 22:18:57.265838  680786 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:18:57.265817  680786 addons.go:69] Setting metrics-server=true in profile "no-preload-023824"
	I0130 22:18:57.265880  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.265898  680786 addons.go:234] Setting addon metrics-server=true in "no-preload-023824"
	W0130 22:18:57.265925  680786 addons.go:243] addon metrics-server should already be in state true
	I0130 22:18:57.265973  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266315  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266349  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266376  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.266282  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.266416  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.286273  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33423
	I0130 22:18:57.286366  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43625
	I0130 22:18:57.286463  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35201
	I0130 22:18:57.287691  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287692  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.287851  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.288302  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288323  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288428  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288439  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288511  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.288524  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.288850  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.288897  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289215  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.289405  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289437  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289685  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.289719  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.289792  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.293877  680786 addons.go:234] Setting addon default-storageclass=true in "no-preload-023824"
	W0130 22:18:57.293899  680786 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:18:57.293928  680786 host.go:66] Checking if "no-preload-023824" exists ...
	I0130 22:18:57.294325  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.294356  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.310259  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41009
	I0130 22:18:57.310765  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.311270  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.311289  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.311818  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.312317  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.313547  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37435
	I0130 22:18:57.314105  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.314665  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.314686  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.314752  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.316570  680786 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:18:57.315368  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.317812  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:18:57.317835  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:18:57.317858  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.318173  680786 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:18:57.318194  680786 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:18:57.321603  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.321671  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36421
	I0130 22:18:57.321961  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.322001  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.322280  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.322296  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.322491  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	W0130 22:18:57.322819  680786 kapi.go:245] failed rescaling "coredns" deployment in "kube-system" namespace and "no-preload-023824" context to 1 replicas: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	E0130 22:18:57.322843  680786 start.go:219] Unable to scale down deployment "coredns" in namespace "kube-system" to 1 replica: non-retryable failure while rescaling coredns deployment: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
	I0130 22:18:57.322866  680786 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.232 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:18:57.324267  680786 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:57.323003  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.323084  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.325567  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.325663  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:18:57.325909  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.326903  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.327113  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.329169  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.331160  680786 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:18:57.332481  680786 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.332500  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:18:57.332519  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.336038  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336525  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.336546  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.336746  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.336901  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.337031  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.337256  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.338027  680786 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36639
	I0130 22:18:57.338387  680786 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:18:57.339078  680786 main.go:141] libmachine: Using API Version  1
	I0130 22:18:57.339097  680786 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:18:57.339406  680786 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:18:57.339628  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetState
	I0130 22:18:57.341385  680786 main.go:141] libmachine: (no-preload-023824) Calling .DriverName
	I0130 22:18:57.341687  680786 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.341705  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:18:57.341725  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHHostname
	I0130 22:18:57.344745  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345159  680786 main.go:141] libmachine: (no-preload-023824) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d1:23:54", ip: ""} in network mk-no-preload-023824: {Iface:virbr1 ExpiryTime:2024-01-30 23:13:19 +0000 UTC Type:0 Mac:52:54:00:d1:23:54 Iaid: IPaddr:192.168.61.232 Prefix:24 Hostname:no-preload-023824 Clientid:01:52:54:00:d1:23:54}
	I0130 22:18:57.345180  680786 main.go:141] libmachine: (no-preload-023824) DBG | domain no-preload-023824 has defined IP address 192.168.61.232 and MAC address 52:54:00:d1:23:54 in network mk-no-preload-023824
	I0130 22:18:57.345408  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHPort
	I0130 22:18:57.345613  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHKeyPath
	I0130 22:18:57.349708  680786 main.go:141] libmachine: (no-preload-023824) Calling .GetSSHUsername
	I0130 22:18:57.349906  680786 sshutil.go:53] new ssh client: &{IP:192.168.61.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/no-preload-023824/id_rsa Username:docker}
	I0130 22:18:57.525974  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:18:57.531582  680786 node_ready.go:35] waiting up to 6m0s for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.532157  680786 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:18:57.546542  680786 node_ready.go:49] node "no-preload-023824" has status "Ready":"True"
	I0130 22:18:57.546575  680786 node_ready.go:38] duration metric: took 14.926402ms waiting for node "no-preload-023824" to be "Ready" ...
	I0130 22:18:57.546592  680786 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:18:57.573983  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:18:57.589817  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:18:57.589854  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:18:57.684894  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:18:57.684926  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:18:57.715247  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:18:57.726490  680786 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:57.726521  680786 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:18:57.824368  680786 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:18:58.842258  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.316238822s)
	I0130 22:18:58.842310  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842327  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842341  680786 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.310137299s)
	I0130 22:18:58.842386  680786 start.go:929] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I0130 22:18:58.842447  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.127164198s)
	I0130 22:18:58.842474  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842486  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842830  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842870  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842893  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.842898  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842900  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.842921  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842924  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.842931  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.842937  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.842948  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.843222  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843243  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.843456  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.843469  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:58.885944  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:58.885978  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:58.886311  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:58.888268  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:58.888288  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228029  680786 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.403587938s)
	I0130 22:18:59.228205  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228233  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.228672  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.228714  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.228738  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.228749  680786 main.go:141] libmachine: Making call to close driver server
	I0130 22:18:59.228762  680786 main.go:141] libmachine: (no-preload-023824) Calling .Close
	I0130 22:18:59.229119  680786 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:18:59.229182  680786 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:18:59.229197  680786 addons.go:470] Verifying addon metrics-server=true in "no-preload-023824"
	I0130 22:18:59.229126  680786 main.go:141] libmachine: (no-preload-023824) DBG | Closing plugin on server side
	I0130 22:18:59.230815  680786 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:18:59.232158  680786 addons.go:505] enable addons completed in 1.966513856s: enabled=[storage-provisioner default-storageclass metrics-server]
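The "host record injected into CoreDNS's ConfigMap" step above is performed by a sed pipeline over `kubectl get configmap coredns -o yaml`, inserting a `hosts` block before the `forward . /etc/resolv.conf` plugin line. As a rough illustration of that transformation only (a minimal sketch, not minikube's actual start.go code; the trimmed Corefile below is assumed), the equivalent string rewrite in Go looks like:

```go
package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block (mapping
// host.minikube.internal to the gateway IP) immediately before the
// "forward . /etc/resolv.conf" plugin line, mirroring the sed command
// shown in the log above.
func injectHostRecord(corefile, gatewayIP string) string {
	var out []string
	for _, line := range strings.Split(corefile, "\n") {
		if strings.Contains(line, "forward . /etc/resolv.conf") {
			out = append(out,
				"        hosts {",
				"           "+gatewayIP+" host.minikube.internal",
				"           fallthrough",
				"        }")
		}
		out = append(out, line)
	}
	return strings.Join(out, "\n")
}

func main() {
	// A trimmed sample Corefile, assumed for illustration only.
	sample := `.:53 {
        errors
        health
        forward . /etc/resolv.conf {
           max_concurrent 1000
        }
        cache 30
    }`
	fmt.Println(injectHostRecord(sample, "192.168.61.1"))
}
```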
	I0130 22:18:55.867390  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.367181  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:56.866689  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.366578  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:57.867406  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.366702  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:58.867537  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.366860  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:18:59.867263  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.366507  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.866976  680821 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:00.994251  680821 kubeadm.go:1088] duration metric: took 11.958653294s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:00.994309  680821 kubeadm.go:406] StartCluster complete in 5m4.981146882s
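The repeated `kubectl get sa default` runs above are a poll-until-ready loop: the same command is retried roughly every 500ms until the default service account exists (about 12s in this run). A generic version of that retry pattern (a sketch under the assumption that kubectl is invoked locally rather than over SSH as in the log) could be:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA re-runs "kubectl get sa default" until it succeeds or
// the context expires, roughly mirroring the polling seen in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl",
			"--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for default service account: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForDefaultSA(ctx, "/var/lib/minikube/kubeconfig"); err != nil {
		fmt.Println(err)
	}
}
```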
	I0130 22:19:00.994337  680821 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.994437  680821 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:00.997310  680821 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:00.997649  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:00.997866  680821 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:00.997819  680821 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:00.997932  680821 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-713938"
	I0130 22:19:00.997951  680821 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-713938"
	W0130 22:19:00.997962  680821 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:00.997978  680821 addons.go:69] Setting metrics-server=true in profile "embed-certs-713938"
	I0130 22:19:00.997979  680821 addons.go:69] Setting default-storageclass=true in profile "embed-certs-713938"
	I0130 22:19:00.997994  680821 addons.go:234] Setting addon metrics-server=true in "embed-certs-713938"
	W0130 22:19:00.998002  680821 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:00.998009  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998012  680821 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-713938"
	I0130 22:19:00.998035  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998398  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998425  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998450  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:00.998430  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.018726  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38259
	I0130 22:19:01.018744  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34705
	I0130 22:19:01.018754  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35995
	I0130 22:19:01.019224  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019255  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019329  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.019860  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.019890  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020012  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.019991  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.020062  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.020311  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020379  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.020530  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.020984  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.021001  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021030  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.021533  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.021581  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.024902  680821 addons.go:234] Setting addon default-storageclass=true in "embed-certs-713938"
	W0130 22:19:01.024926  680821 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:01.024955  680821 host.go:66] Checking if "embed-certs-713938" exists ...
	I0130 22:19:01.025333  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.025372  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.041760  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33051
	I0130 22:19:01.043510  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37777
	I0130 22:19:01.043937  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.043980  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.044434  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044454  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.044864  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.044902  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.045102  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045331  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.045686  680821 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:01.045730  680821 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:01.045952  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.049065  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37441
	I0130 22:19:01.049076  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.051101  680821 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:01.049716  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.052918  680821 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.052937  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:01.052959  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.055109  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.055135  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.057586  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.057591  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057611  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.057625  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.057656  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.057829  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.057831  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.057974  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.058123  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.063470  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.065048  680821 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:01.066385  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:01.066404  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:01.066425  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.066427  680821 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39959
	I0130 22:19:01.067271  680821 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:01.067806  680821 main.go:141] libmachine: Using API Version  1
	I0130 22:19:01.067834  680821 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:01.068198  680821 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:01.068403  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetState
	I0130 22:19:01.069684  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070069  680821 main.go:141] libmachine: (embed-certs-713938) Calling .DriverName
	I0130 22:19:01.070133  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.070162  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.070347  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.070369  680821 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.070381  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:01.070402  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHHostname
	I0130 22:19:01.073308  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073914  680821 main.go:141] libmachine: (embed-certs-713938) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:79:c8:41", ip: ""} in network mk-embed-certs-713938: {Iface:virbr4 ExpiryTime:2024-01-30 23:13:41 +0000 UTC Type:0 Mac:52:54:00:79:c8:41 Iaid: IPaddr:192.168.72.213 Prefix:24 Hostname:embed-certs-713938 Clientid:01:52:54:00:79:c8:41}
	I0130 22:19:01.073945  680821 main.go:141] libmachine: (embed-certs-713938) DBG | domain embed-certs-713938 has defined IP address 192.168.72.213 and MAC address 52:54:00:79:c8:41 in network mk-embed-certs-713938
	I0130 22:19:01.073978  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074155  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074207  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHPort
	I0130 22:19:01.074325  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
	I0130 22:19:01.074346  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHKeyPath
	I0130 22:19:01.074441  680821 main.go:141] libmachine: (embed-certs-713938) Calling .GetSSHUsername
	I0130 22:19:01.074534  680821 sshutil.go:53] new ssh client: &{IP:192.168.72.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa Username:docker}
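Each `sshutil.go:53] new ssh client` entry above corresponds to a key-based SSH connection to the VM using the per-machine id_rsa key. A minimal equivalent with golang.org/x/crypto/ssh (an illustrative sketch, not minikube's sshutil package; host-key verification is deliberately skipped here since these are throwaway test VMs) might look like:

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialVM opens an SSH connection to a minikube VM with its private key,
// similar in spirit to the sshutil client entries in the log.
func dialVM(ip, keyPath, user string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for ephemeral test VMs only
	}
	return ssh.Dial("tcp", ip+":22", cfg)
}

func main() {
	client, err := dialVM("192.168.72.213",
		"/home/jenkins/minikube-integration/18014-640473/.minikube/machines/embed-certs-713938/id_rsa",
		"docker")
	if err != nil {
		fmt.Println("ssh dial failed:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
```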
	I0130 22:19:01.210631  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:01.237088  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:01.307032  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:01.307130  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:01.368366  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:01.368405  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:01.388184  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:01.443355  680821 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.443414  680821 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:01.558399  680821 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:01.610498  680821 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-713938" context rescaled to 1 replicas
	I0130 22:19:01.610545  680821 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:01.612750  680821 out.go:177] * Verifying Kubernetes components...
	I0130 22:18:59.584739  680786 pod_ready.go:102] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:01.089751  680786 pod_ready.go:92] pod "coredns-76f75df574-rktrb" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.089826  680786 pod_ready.go:81] duration metric: took 3.515759187s waiting for pod "coredns-76f75df574-rktrb" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.089853  680786 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098560  680786 pod_ready.go:92] pod "coredns-76f75df574-znj8f" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.098645  680786 pod_ready.go:81] duration metric: took 8.774285ms waiting for pod "coredns-76f75df574-znj8f" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.098671  680786 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.106943  680786 pod_ready.go:92] pod "etcd-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.107036  680786 pod_ready.go:81] duration metric: took 8.345837ms waiting for pod "etcd-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.107062  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120384  680786 pod_ready.go:92] pod "kube-apiserver-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.120413  680786 pod_ready.go:81] duration metric: took 13.332445ms waiting for pod "kube-apiserver-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.120427  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129739  680786 pod_ready.go:92] pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:01.129825  680786 pod_ready.go:81] duration metric: took 9.387442ms waiting for pod "kube-controller-manager-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:01.129850  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282077  680786 pod_ready.go:92] pod "kube-proxy-8rn6v" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.282110  680786 pod_ready.go:81] duration metric: took 1.152243055s waiting for pod "kube-proxy-8rn6v" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.282123  680786 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681191  680786 pod_ready.go:92] pod "kube-scheduler-no-preload-023824" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:02.681221  680786 pod_ready.go:81] duration metric: took 399.089453ms waiting for pod "kube-scheduler-no-preload-023824" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:02.681232  680786 pod_ready.go:38] duration metric: took 5.134627161s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:02.681249  680786 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:19:02.681313  680786 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:19:02.695239  680786 api_server.go:72] duration metric: took 5.372338357s to wait for apiserver process to appear ...
	I0130 22:19:02.695265  680786 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:19:02.695291  680786 api_server.go:253] Checking apiserver healthz at https://192.168.61.232:8443/healthz ...
	I0130 22:19:02.700070  680786 api_server.go:279] https://192.168.61.232:8443/healthz returned 200:
	ok
	I0130 22:19:02.701235  680786 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:19:02.701266  680786 api_server.go:131] duration metric: took 5.988974ms to wait for apiserver health ...
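The healthz wait above is a plain HTTPS GET against https://192.168.61.232:8443/healthz, considered passed once the apiserver answers 200 with body "ok". A bare-bones probe in the same spirit (a sketch; certificate verification is skipped to keep it self-contained, whereas a real check would trust the cluster CA) could be:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkHealthz issues one GET against the apiserver /healthz endpoint and
// reports whether it answered 200 "ok", like the check in the log above.
func checkHealthz(url string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Skipping verification keeps the example self-contained;
			// production code should verify against the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(url)
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := checkHealthz("https://192.168.61.232:8443/healthz")
	fmt.Println(ok, err)
}
```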
	I0130 22:19:02.701279  680786 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:19:02.899520  680786 system_pods.go:59] 9 kube-system pods found
	I0130 22:19:02.899558  680786 system_pods.go:61] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:02.899565  680786 system_pods.go:61] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:02.899572  680786 system_pods.go:61] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:02.899579  680786 system_pods.go:61] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:02.899586  680786 system_pods.go:61] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:02.899592  680786 system_pods.go:61] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:02.899599  680786 system_pods.go:61] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:02.899610  680786 system_pods.go:61] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:02.899626  680786 system_pods.go:61] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:02.899637  680786 system_pods.go:74] duration metric: took 198.349705ms to wait for pod list to return data ...
	I0130 22:19:02.899649  680786 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:19:03.080624  680786 default_sa.go:45] found service account: "default"
	I0130 22:19:03.080668  680786 default_sa.go:55] duration metric: took 181.003649ms for default service account to be created ...
	I0130 22:19:03.080681  680786 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:19:03.285004  680786 system_pods.go:86] 9 kube-system pods found
	I0130 22:19:03.285040  680786 system_pods.go:89] "coredns-76f75df574-rktrb" [e5470bf8-982d-4707-8cd8-c0c0228219fa] Running
	I0130 22:19:03.285048  680786 system_pods.go:89] "coredns-76f75df574-znj8f" [985cd51e-1832-487e-af5b-6a29108fc494] Running
	I0130 22:19:03.285056  680786 system_pods.go:89] "etcd-no-preload-023824" [69ddc249-4d9e-4409-9919-232ad4db11dd] Running
	I0130 22:19:03.285063  680786 system_pods.go:89] "kube-apiserver-no-preload-023824" [013500de-2a63-4981-84ad-8370fde42e39] Running
	I0130 22:19:03.285069  680786 system_pods.go:89] "kube-controller-manager-no-preload-023824" [21374951-44a8-4054-aa8e-7fd1401d9069] Running
	I0130 22:19:03.285073  680786 system_pods.go:89] "kube-proxy-8rn6v" [97ee699b-fd5f-4a47-b858-5b202d1e9384] Running
	I0130 22:19:03.285078  680786 system_pods.go:89] "kube-scheduler-no-preload-023824" [21f68f57-2ce0-4830-b041-8183d416a03d] Running
	I0130 22:19:03.285089  680786 system_pods.go:89] "metrics-server-57f55c9bc5-nvplb" [04303a01-14e7-441d-876c-25425491cae6] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:19:03.285097  680786 system_pods.go:89] "storage-provisioner" [e9fb2b13-124f-427c-875c-ee1ea1178907] Running
	I0130 22:19:03.285107  680786 system_pods.go:126] duration metric: took 204.418927ms to wait for k8s-apps to be running ...
	I0130 22:19:03.285117  680786 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:19:03.285172  680786 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.303077  680786 system_svc.go:56] duration metric: took 17.949308ms WaitForService to wait for kubelet.
	I0130 22:19:03.303108  680786 kubeadm.go:581] duration metric: took 5.980212644s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:19:03.303133  680786 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:19:03.481755  680786 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:19:03.481794  680786 node_conditions.go:123] node cpu capacity is 2
	I0130 22:19:03.481804  680786 node_conditions.go:105] duration metric: took 178.666283ms to run NodePressure ...
	I0130 22:19:03.481816  680786 start.go:228] waiting for startup goroutines ...
	I0130 22:19:03.481822  680786 start.go:233] waiting for cluster config update ...
	I0130 22:19:03.481860  680786 start.go:242] writing updated cluster config ...
	I0130 22:19:03.482145  680786 ssh_runner.go:195] Run: rm -f paused
	I0130 22:19:03.549733  680786 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 22:19:03.551653  680786 out.go:177] * Done! kubectl is now configured to use "no-preload-023824" cluster and "default" namespace by default
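The "minor skew: 0" note two lines up comes from comparing the kubectl client minor version (1.29.1) with the cluster version (1.29.0-rc.2). A tiny version-skew computation of that kind (a sketch; minikube's own check may differ in details such as pre-release handling) is:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor versions of
// two "major.minor.patch[-suffix]" strings, e.g. "1.29.1" vs "1.29.0-rc.2".
func minorSkew(client, cluster string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	c, err := minor(client)
	if err != nil {
		return 0, err
	}
	s, err := minor(cluster)
	if err != nil {
		return 0, err
	}
	if c > s {
		return c - s, nil
	}
	return s - c, nil
}

func main() {
	skew, _ := minorSkew("1.29.1", "1.29.0-rc.2")
	fmt.Println("minor skew:", skew) // prints 0
}
```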
	I0130 22:19:01.614025  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:03.810450  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.573311695s)
	I0130 22:19:03.810519  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810531  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810592  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599920536s)
	I0130 22:19:03.810625  680821 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.422412443s)
	I0130 22:19:03.810639  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.810653  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.810640  680821 start.go:929] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0130 22:19:03.811010  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811010  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811035  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811034  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811038  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811045  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811055  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811056  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811065  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.811074  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.811299  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811317  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.811626  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.811677  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.811686  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838002  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.838036  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.838339  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.838364  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.838384  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842042  680821 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (2.227988129s)
	I0130 22:19:03.842085  680821 node_ready.go:35] waiting up to 6m0s for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.842321  680821 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.283887868s)
	I0130 22:19:03.842355  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842369  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.842728  680821 main.go:141] libmachine: (embed-certs-713938) DBG | Closing plugin on server side
	I0130 22:19:03.842753  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.842761  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.842772  680821 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:03.842784  680821 main.go:141] libmachine: (embed-certs-713938) Calling .Close
	I0130 22:19:03.843015  680821 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:03.843031  680821 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:03.843042  680821 addons.go:470] Verifying addon metrics-server=true in "embed-certs-713938"
	I0130 22:19:03.844872  680821 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:03.846361  680821 addons.go:505] enable addons completed in 2.848549166s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:03.857259  680821 node_ready.go:49] node "embed-certs-713938" has status "Ready":"True"
	I0130 22:19:03.857281  680821 node_ready.go:38] duration metric: took 15.183316ms waiting for node "embed-certs-713938" to be "Ready" ...
	I0130 22:19:03.857290  680821 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:03.880136  680821 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392506  680821 pod_ready.go:92] pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.392542  680821 pod_ready.go:81] duration metric: took 1.512370879s waiting for pod "coredns-5dd5756b68-l6hkm" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.392556  680821 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402272  680821 pod_ready.go:92] pod "etcd-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.402382  680821 pod_ready.go:81] duration metric: took 9.816254ms waiting for pod "etcd-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.402410  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414813  680821 pod_ready.go:92] pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.414844  680821 pod_ready.go:81] duration metric: took 12.42049ms waiting for pod "kube-apiserver-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.414861  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424628  680821 pod_ready.go:92] pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.424651  680821 pod_ready.go:81] duration metric: took 9.782ms waiting for pod "kube-controller-manager-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.424660  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445652  680821 pod_ready.go:92] pod "kube-proxy-f7mgv" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.445679  680821 pod_ready.go:81] duration metric: took 21.012459ms waiting for pod "kube-proxy-f7mgv" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.445692  680821 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
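The `pod_ready.go` waits above poll each system-critical pod until its PodReady condition reports True. Expressed directly with client-go (a hedged sketch of the same idea, not minikube's pod_ready.go; the kubeconfig path and pod name are taken from the log), the per-pod check boils down to:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the named pod has condition Ready=True,
// which is what the pod_ready.go waits in the log are checking for.
func isPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	for ctx.Err() == nil {
		ready, err := isPodReady(ctx, cs, "kube-system", "kube-scheduler-embed-certs-713938")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
```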
	I0130 22:19:07.459758  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (14.204942723s)
	I0130 22:19:07.459833  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:07.475749  681007 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:19:07.487056  681007 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:19:07.498268  681007 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:19:07.498316  681007 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:19:07.552393  681007 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:19:07.552482  681007 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:19:07.703415  681007 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:19:07.703558  681007 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:19:07.703688  681007 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:19:07.929127  681007 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:19:07.931129  681007 out.go:204]   - Generating certificates and keys ...
	I0130 22:19:07.931256  681007 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:19:07.931340  681007 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:19:07.931443  681007 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0130 22:19:07.931568  681007 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0130 22:19:07.931907  681007 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0130 22:19:07.933061  681007 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0130 22:19:07.934226  681007 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0130 22:19:07.935564  681007 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0130 22:19:07.936846  681007 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0130 22:19:07.938253  681007 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0130 22:19:07.939205  681007 kubeadm.go:322] [certs] Using the existing "sa" key
	I0130 22:19:07.939281  681007 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:19:08.017218  681007 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:19:08.179939  681007 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:19:08.390089  681007 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:19:08.500690  681007 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:19:08.501201  681007 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:19:08.506551  681007 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:19:08.508442  681007 out.go:204]   - Booting up control plane ...
	I0130 22:19:08.508554  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:19:08.508643  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:19:08.509176  681007 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:19:08.528978  681007 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:19:08.529909  681007 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:19:08.530016  681007 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:19:08.657813  681007 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:19:05.846282  680821 pod_ready.go:92] pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:05.846316  680821 pod_ready.go:81] duration metric: took 400.615309ms waiting for pod "kube-scheduler-embed-certs-713938" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:05.846329  680821 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:07.854210  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:10.354894  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:12.358737  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:14.361808  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:16.661056  681007 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.003483 seconds
	I0130 22:19:16.663313  681007 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:19:16.682919  681007 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0130 22:19:17.218185  681007 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0130 22:19:17.218446  681007 kubeadm.go:322] [mark-control-plane] Marking the node default-k8s-diff-port-850803 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0130 22:19:17.733745  681007 kubeadm.go:322] [bootstrap-token] Using token: oi6eg1.osding0t7oyyeu0p
	I0130 22:19:17.735211  681007 out.go:204]   - Configuring RBAC rules ...
	I0130 22:19:17.735388  681007 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0130 22:19:17.744899  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0130 22:19:17.754341  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0130 22:19:17.758107  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0130 22:19:17.761508  681007 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0130 22:19:17.765503  681007 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0130 22:19:17.781414  681007 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0130 22:19:18.095502  681007 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0130 22:19:18.190245  681007 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0130 22:19:18.190272  681007 kubeadm.go:322] 
	I0130 22:19:18.190348  681007 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0130 22:19:18.190360  681007 kubeadm.go:322] 
	I0130 22:19:18.190452  681007 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0130 22:19:18.190461  681007 kubeadm.go:322] 
	I0130 22:19:18.190493  681007 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0130 22:19:18.190604  681007 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0130 22:19:18.190702  681007 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0130 22:19:18.190716  681007 kubeadm.go:322] 
	I0130 22:19:18.190800  681007 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0130 22:19:18.190835  681007 kubeadm.go:322] 
	I0130 22:19:18.190892  681007 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0130 22:19:18.190906  681007 kubeadm.go:322] 
	I0130 22:19:18.190976  681007 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0130 22:19:18.191074  681007 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0130 22:19:18.191178  681007 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0130 22:19:18.191191  681007 kubeadm.go:322] 
	I0130 22:19:18.191293  681007 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0130 22:19:18.191416  681007 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0130 22:19:18.191438  681007 kubeadm.go:322] 
	I0130 22:19:18.191544  681007 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.191672  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 \
	I0130 22:19:18.191703  681007 kubeadm.go:322] 	--control-plane 
	I0130 22:19:18.191714  681007 kubeadm.go:322] 
	I0130 22:19:18.191814  681007 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0130 22:19:18.191824  681007 kubeadm.go:322] 
	I0130 22:19:18.191936  681007 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8444 --token oi6eg1.osding0t7oyyeu0p \
	I0130 22:19:18.192085  681007 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:3fa539707534729a40a7643c8e0a51a6d2c7221a8df3de6d47e122b5d537c154 
	I0130 22:19:18.192660  681007 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
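The `--discovery-token-ca-cert-hash sha256:…` value printed in the join command above is the SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. It can be recomputed from ca.crt like this (a sketch; the certificate path is inferred from the "[certs] Using certificateDir folder" line above and may differ per setup):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery-token-ca-cert-hash: the SHA-256
// of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate.
func caCertHash(caPath string) (string, error) {
	pemBytes, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		return "", fmt.Errorf("no PEM block found in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	sum := sha256.Sum256(spki)
	return fmt.Sprintf("sha256:%x", sum), nil
}

func main() {
	// Path assumed from the certificateDir reported by kubeadm above.
	hash, err := caCertHash("/var/lib/minikube/certs/ca.crt")
	fmt.Println(hash, err)
}
```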
	I0130 22:19:18.192684  681007 cni.go:84] Creating CNI manager for ""
	I0130 22:19:18.192692  681007 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:19:18.194376  681007 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0130 22:19:18.195608  681007 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:19:18.244311  681007 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
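The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. Its exact contents are not shown in the log; the snippet below writes a representative bridge + portmap conflist of the kind a bridge CNI setup uses (an assumed example, not the literal file minikube transferred; subnet and plugin names are illustrative):

```go
package main

import (
	"fmt"
	"os"
)

// A representative bridge CNI chain: a "bridge" plugin with host-local IPAM
// plus a "portmap" plugin for hostPort support. The subnet and names are
// illustrative, not the literal 1-k8s.conflist from the log.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
		fmt.Println("write failed (requires root):", err)
	}
}
```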
	I0130 22:19:18.285107  681007 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:19:18.285193  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.285210  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=default-k8s-diff-port-850803 minikube.k8s.io/updated_at=2024_01_30T22_19_18_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:18.682930  681007 ops.go:34] apiserver oom_adj: -16
	I0130 22:19:18.683119  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:16.854674  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:18.854723  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:19.184109  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:19.683715  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.183529  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.684197  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.184124  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:21.684022  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.184033  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:22.683812  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.184203  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:23.683513  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:20.857387  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:23.354163  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:25.354683  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:24.184064  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:24.683177  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.183896  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:25.683522  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.183779  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:26.683891  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.183468  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.683878  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.183471  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:28.683793  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:27.853744  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:30.356959  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:29.183658  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:29.683264  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.183311  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:30.683828  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.183841  681007 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:19:31.287952  681007 kubeadm.go:1088] duration metric: took 13.002835585s to wait for elevateKubeSystemPrivileges.
	I0130 22:19:31.287988  681007 kubeadm.go:406] StartCluster complete in 5m12.874624935s
	I0130 22:19:31.288014  681007 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.288132  681007 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:19:31.290435  681007 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:19:31.290772  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:19:31.290924  681007 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:19:31.291004  681007 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291027  681007 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291024  681007 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-850803"
	W0130 22:19:31.291035  681007 addons.go:243] addon storage-provisioner should already be in state true
	I0130 22:19:31.291044  681007 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:19:31.291048  681007 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-850803"
	I0130 22:19:31.291053  681007 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-850803"
	I0130 22:19:31.291078  681007 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:31.291084  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	W0130 22:19:31.291089  681007 addons.go:243] addon metrics-server should already be in state true
	I0130 22:19:31.291142  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.291497  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291528  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291543  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.291577  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.291578  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.308624  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33823
	I0130 22:19:31.308641  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40995
	I0130 22:19:31.308628  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44049
	I0130 22:19:31.309140  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309143  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309231  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.309662  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309683  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309807  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309825  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.309829  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.309837  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.310304  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310324  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310621  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.310841  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.310944  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.310983  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.311193  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.311237  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.314600  681007 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-850803"
	W0130 22:19:31.314619  681007 addons.go:243] addon default-storageclass should already be in state true
	I0130 22:19:31.314641  681007 host.go:66] Checking if "default-k8s-diff-port-850803" exists ...
	I0130 22:19:31.314888  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.314923  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.331266  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43759
	I0130 22:19:31.331358  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38109
	I0130 22:19:31.332259  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332277  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.332769  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332791  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.332930  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.332949  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.333243  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333307  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.333459  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.333534  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.335458  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.337520  681007 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:19:31.335819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.338601  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0130 22:19:31.338925  681007 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.338944  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:19:31.338969  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.340850  681007 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0130 22:19:31.339883  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.341794  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342280  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.342314  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.342225  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.342344  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0130 22:19:31.342364  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0130 22:19:31.342381  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.342456  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.342572  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.342787  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.342807  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.342806  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.343515  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.344047  681007 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:19:31.344096  681007 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:19:31.345163  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346044  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.346073  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.346341  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.346515  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.346617  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.346703  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.360658  681007 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39497
	I0130 22:19:31.361009  681007 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:19:31.361631  681007 main.go:141] libmachine: Using API Version  1
	I0130 22:19:31.361653  681007 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:19:31.362059  681007 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:19:31.362284  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetState
	I0130 22:19:31.363819  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .DriverName
	I0130 22:19:31.364079  681007 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.364091  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:19:31.364104  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHHostname
	I0130 22:19:31.367056  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367482  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:7c:86", ip: ""} in network mk-default-k8s-diff-port-850803: {Iface:virbr2 ExpiryTime:2024-01-30 23:14:01 +0000 UTC Type:0 Mac:52:54:00:b1:7c:86 Iaid: IPaddr:192.168.50.254 Prefix:24 Hostname:default-k8s-diff-port-850803 Clientid:01:52:54:00:b1:7c:86}
	I0130 22:19:31.367508  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | domain default-k8s-diff-port-850803 has defined IP address 192.168.50.254 and MAC address 52:54:00:b1:7c:86 in network mk-default-k8s-diff-port-850803
	I0130 22:19:31.367705  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHPort
	I0130 22:19:31.367877  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHKeyPath
	I0130 22:19:31.368024  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .GetSSHUsername
	I0130 22:19:31.368159  681007 sshutil.go:53] new ssh client: &{IP:192.168.50.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/default-k8s-diff-port-850803/id_rsa Username:docker}
	I0130 22:19:31.486668  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:19:31.512324  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:19:31.548212  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0130 22:19:31.548241  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0130 22:19:31.565423  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:19:31.607291  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0130 22:19:31.607318  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0130 22:19:31.647162  681007 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.647192  681007 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0130 22:19:31.723006  681007 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0130 22:19:31.913300  681007 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-850803" context rescaled to 1 replicas
	I0130 22:19:31.913355  681007 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.254 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:19:31.915323  681007 out.go:177] * Verifying Kubernetes components...
	I0130 22:19:31.916700  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:19:33.003770  681007 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.517052198s)
	I0130 22:19:33.003803  681007 start.go:929] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
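[editor's note] The sed pipeline a few lines above rewrites the coredns ConfigMap in place: it inserts a hosts block ahead of the forward directive and a log directive ahead of errors. Reconstructed from that sed script (surrounding Corefile directives elided and unchanged), the injected fragments look like:

        log
        errors
        ...
        hosts {
           192.168.50.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...

which is what the "host record injected into CoreDNS's ConfigMap" line confirms.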
	I0130 22:19:33.533121  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.020753837s)
	I0130 22:19:33.533193  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533208  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533167  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.967690921s)
	I0130 22:19:33.533306  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533322  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533701  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533714  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533727  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533728  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.533738  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533747  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533745  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533759  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.533769  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.533802  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.533973  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.533987  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.535503  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.535515  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.535531  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.628879  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.628911  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.629222  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.629249  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.629251  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) DBG | Closing plugin on server side
	I0130 22:19:33.742264  681007 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.825530161s)
	I0130 22:19:33.742301  681007 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.019251933s)
	I0130 22:19:33.742328  681007 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.742355  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742371  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.742681  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.742701  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.742712  681007 main.go:141] libmachine: Making call to close driver server
	I0130 22:19:33.742736  681007 main.go:141] libmachine: (default-k8s-diff-port-850803) Calling .Close
	I0130 22:19:33.743035  681007 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:19:33.743058  681007 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:19:33.743072  681007 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-850803"
	I0130 22:19:33.745046  681007 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0130 22:19:33.746494  681007 addons.go:505] enable addons completed in 2.455579767s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0130 22:19:33.792738  681007 node_ready.go:49] node "default-k8s-diff-port-850803" has status "Ready":"True"
	I0130 22:19:33.792765  681007 node_ready.go:38] duration metric: took 50.422631ms waiting for node "default-k8s-diff-port-850803" to be "Ready" ...
	I0130 22:19:33.792774  681007 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:19:33.814090  681007 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:32.853930  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.854970  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:34.821685  681007 pod_ready.go:92] pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.821713  681007 pod_ready.go:81] duration metric: took 1.007586687s waiting for pod "coredns-5dd5756b68-z27l8" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.821725  681007 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827824  681007 pod_ready.go:92] pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.827846  681007 pod_ready.go:81] duration metric: took 6.114329ms waiting for pod "etcd-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.827855  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835557  681007 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.835577  681007 pod_ready.go:81] duration metric: took 7.716283ms waiting for pod "kube-apiserver-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.835586  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846707  681007 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:34.846730  681007 pod_ready.go:81] duration metric: took 11.137144ms waiting for pod "kube-controller-manager-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:34.846742  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855583  681007 pod_ready.go:92] pod "kube-proxy-9b97q" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:35.855607  681007 pod_ready.go:81] duration metric: took 1.00885903s waiting for pod "kube-proxy-9b97q" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:35.855616  681007 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146642  681007 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace has status "Ready":"True"
	I0130 22:19:36.146669  681007 pod_ready.go:81] duration metric: took 291.044646ms waiting for pod "kube-scheduler-default-k8s-diff-port-850803" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:36.146679  681007 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	I0130 22:19:38.154183  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
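[editor's note] Both profiles (681007 and 680821) now poll metrics-server pods that never report Ready. The addon in this run was installed with the image fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so the pod most likely cannot pull its image and the repeated "Ready":"False" lines below are expected until the test's own wait times out. One way to confirm the cause from outside the test harness (the label selector is an assumption; the addon's Deployment usually labels its pod k8s-app=metrics-server):

    kubectl --context default-k8s-diff-port-850803 -n kube-system \
      get pods -l k8s-app=metrics-server \
      -o jsonpath='{.items[*].status.containerStatuses[*].state.waiting.reason}'

which would typically print ImagePullBackOff or ErrImagePull for an unresolvable registry.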
	I0130 22:19:37.354609  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:39.854928  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:40.154641  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:42.159531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:41.855320  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.354523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:44.654954  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:47.154579  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:46.355021  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:48.853459  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:49.653829  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:51.655608  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:50.853891  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:52.854695  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:55.354018  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:54.154453  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:56.155065  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:58.657247  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:19:57.853975  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:00.354902  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:01.153907  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:03.654237  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:02.854731  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:05.356880  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:06.155143  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:08.155296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:07.856132  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.356464  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:10.155799  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.654333  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:12.853942  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.354885  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:15.154056  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.154535  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:17.853402  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:20.353980  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:19.655422  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.154392  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:22.354117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.355044  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:24.155171  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.655471  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:26.854532  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.354204  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:29.154677  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.654466  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:31.356403  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:33.356906  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:34.154078  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:36.654298  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:35.853262  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:37.857523  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:40.354097  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:39.154049  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:41.654457  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:43.654895  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:42.355195  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:44.854639  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:45.655775  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:48.155289  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:47.357754  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:49.855799  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:50.155498  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.655409  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:52.353449  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:54.354453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:55.155034  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:57.654844  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:56.354612  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:58.854992  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:20:59.655694  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.656577  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:01.353141  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:03.353830  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:04.154299  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:06.654312  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.654807  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:05.854650  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:08.353951  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.354031  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:10.655061  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.655432  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:12.354994  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:14.855265  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:15.159097  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:17.653783  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:16.857702  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.359396  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:19.655858  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:22.156091  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:21.854394  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.354360  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:24.655296  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:27.158080  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:26.855014  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.356117  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:29.653580  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:32.154606  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:31.854704  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.355484  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:34.654068  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.654158  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.654269  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:36.357452  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:38.855223  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:40.655689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.154796  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:41.354371  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:43.854228  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:45.155130  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:47.155889  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:46.355266  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:48.355485  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:50.362578  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:49.653701  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:51.655019  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:52.854642  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:55.353605  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:54.154411  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:56.654614  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:58.660728  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:21:57.854182  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:00.354287  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:01.155135  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:03.654733  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:02.853711  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:04.854845  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:05.656121  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:08.154541  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:07.353888  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:09.354542  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:10.653671  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:12.657917  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:11.854575  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:14.354327  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:15.157012  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:17.158822  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:16.354558  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:18.355214  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:19.655591  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.154262  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:20.855145  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:22.855595  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:25.354646  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:24.654590  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:26.655050  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:27.357453  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.854619  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:29.154225  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.156000  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:33.654263  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:31.855106  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:34.354611  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:35.654550  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:37.654631  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:36.856135  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.354424  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:39.655008  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.657897  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.659483  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:41.354687  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:43.354978  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:46.154172  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:48.154643  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:45.853374  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:47.854345  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.353899  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:50.655054  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.655335  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:52.354795  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.853217  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:54.655525  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:57.153994  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:56.856987  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.353446  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:22:59.157129  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.655835  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.657302  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:01.355499  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:03.356368  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:06.154373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:08.654373  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854404  680821 pod_ready.go:102] pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:05.854432  680821 pod_ready.go:81] duration metric: took 4m0.008096056s waiting for pod "metrics-server-57f55c9bc5-vhxng" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:05.854442  680821 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:05.854449  680821 pod_ready.go:38] duration metric: took 4m1.997150293s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
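The pod_ready.go lines above show the wait loop hitting its 4-minute deadline while the metrics-server pod stays not-Ready. As a rough illustration only (not minikube's actual pod_ready implementation), the sketch below polls a pod's Ready condition through kubectl until a deadline expires; the context name and pod name are placeholders, and the 4-minute window simply mirrors the timeout seen in the log.

// Minimal sketch: poll a pod's Ready condition via kubectl until a deadline.
// Assumes kubectl is on PATH; context/pod names below are placeholders.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(kubeContext, namespace, pod string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", namespace, "get", "pod", pod, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	deadline := time.Now().Add(4 * time.Minute) // same 4m window as the log above
	for time.Now().Before(deadline) {
		ready, err := podReady("my-context", "kube-system", "metrics-server-xxxxx")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("deadline exceeded; giving up, as WaitExtra does in the log")
}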
	I0130 22:23:05.854467  680821 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:05.854502  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:05.854561  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:05.929032  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:05.929061  680821 cri.go:89] found id: ""
	I0130 22:23:05.929073  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:05.929137  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.934693  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:05.934777  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:05.982312  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:05.982342  680821 cri.go:89] found id: ""
	I0130 22:23:05.982352  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:05.982417  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:05.986932  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:05.986988  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:06.031983  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.032007  680821 cri.go:89] found id: ""
	I0130 22:23:06.032015  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:06.032073  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.036373  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:06.036429  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:06.084796  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.084829  680821 cri.go:89] found id: ""
	I0130 22:23:06.084840  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:06.084908  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.089120  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:06.089185  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:06.139977  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.139998  680821 cri.go:89] found id: ""
	I0130 22:23:06.140006  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:06.140063  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.144088  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:06.144147  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:06.185075  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.185103  680821 cri.go:89] found id: ""
	I0130 22:23:06.185113  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:06.185164  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:06.189014  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:06.189070  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:06.223430  680821 cri.go:89] found id: ""
	I0130 22:23:06.223459  680821 logs.go:284] 0 containers: []
	W0130 22:23:06.223469  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:06.223477  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:06.223529  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:06.260048  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.260071  680821 cri.go:89] found id: ""
	I0130 22:23:06.260083  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:06.260141  680821 ssh_runner.go:195] Run: which crictl
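The cri.go/ssh_runner.go lines above repeat one discovery step per control-plane component: run `sudo crictl ps -a --quiet --name=<component>` and record any container IDs found (kindnet correctly returns none on this cluster). A minimal sketch of that step, assuming crictl is installed and sudo is available, is:

// Minimal sketch: list container IDs per component name with crictl,
// mirroring the discovery commands printed above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func findContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	// --quiet prints one container ID per line; Fields drops the blanks.
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
	for _, c := range components {
		ids, err := findContainers(c)
		if err != nil {
			fmt.Printf("%s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}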
	I0130 22:23:06.263987  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:06.264013  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:06.315899  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:06.315930  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:06.366903  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:06.366935  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:06.406395  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:06.406429  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:06.445937  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:06.445967  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:06.507335  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:06.507368  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:06.559276  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:06.559313  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:06.618349  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:06.618390  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:06.660376  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:06.660410  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:07.080461  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:07.080504  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:07.153607  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.153767  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.176441  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:07.176475  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:07.191016  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:07.191045  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:07.338888  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.338919  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:07.339094  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:07.339109  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:07.339121  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:07.339129  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:07.339142  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
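After discovery, the "Gathering logs for ..." phase above tails the last 400 lines from the crio and kubelet journald units and from each found container via crictl, then reports the kubelet problems it spotted. A hedged sketch of that gathering step (not the logs.go implementation itself) is below; pass a container ID from the discovery step as the first argument.

// Minimal sketch: tail the same log sources the report shows being gathered.
// Assumes bash, journalctl, and crictl are present on the node.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func tail(command string) {
	out, err := exec.Command("/bin/bash", "-c", command).CombinedOutput()
	if err != nil {
		fmt.Printf("%q failed: %v\n", command, err)
	}
	fmt.Print(string(out))
}

func main() {
	tail("sudo journalctl -u crio -n 400")
	tail("sudo journalctl -u kubelet -n 400")
	if len(os.Args) > 1 {
		// Container ID discovered earlier, supplied by the caller.
		tail("sudo /usr/bin/crictl logs --tail 400 " + os.Args[1])
	}
}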
	I0130 22:23:10.656229  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:13.154689  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:15.156258  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.654584  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:17.340518  680821 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:17.358757  680821 api_server.go:72] duration metric: took 4m15.748181205s to wait for apiserver process to appear ...
	I0130 22:23:17.358785  680821 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:17.358824  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:17.358882  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:17.402796  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:17.402819  680821 cri.go:89] found id: ""
	I0130 22:23:17.402827  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:17.402878  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.408452  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:17.408525  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:17.454148  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.454174  680821 cri.go:89] found id: ""
	I0130 22:23:17.454185  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:17.454260  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.458375  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:17.458450  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:17.508924  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:17.508953  680821 cri.go:89] found id: ""
	I0130 22:23:17.508960  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:17.509011  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.512833  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:17.512900  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:17.556821  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:17.556849  680821 cri.go:89] found id: ""
	I0130 22:23:17.556857  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:17.556913  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.561605  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:17.561666  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:17.604962  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.604991  680821 cri.go:89] found id: ""
	I0130 22:23:17.605001  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:17.605078  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.611321  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:17.611395  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:17.651827  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:17.651860  680821 cri.go:89] found id: ""
	I0130 22:23:17.651869  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:17.651918  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.656414  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:17.656472  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:17.696085  680821 cri.go:89] found id: ""
	I0130 22:23:17.696120  680821 logs.go:284] 0 containers: []
	W0130 22:23:17.696130  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:17.696139  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:17.696197  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:17.742145  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.742171  680821 cri.go:89] found id: ""
	I0130 22:23:17.742183  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:17.742248  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:17.746837  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:17.746861  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:17.864654  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:17.864691  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:17.917753  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:17.917785  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:17.958876  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:17.958914  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:17.997774  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:17.997811  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:18.047778  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:18.047823  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:18.111572  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:18.111621  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:18.489601  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:18.489683  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:18.549905  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:18.549953  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:18.631865  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.632060  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.656777  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:18.656813  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:18.670944  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:18.670973  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:18.726388  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:18.726424  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:18.766317  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766350  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:18.766427  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:18.766446  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:18.766460  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:18.766473  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:18.766485  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:20.155531  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:22.654846  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:25.153520  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:27.158571  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:28.767516  680821 api_server.go:253] Checking apiserver healthz at https://192.168.72.213:8443/healthz ...
	I0130 22:23:28.774562  680821 api_server.go:279] https://192.168.72.213:8443/healthz returned 200:
	ok
	I0130 22:23:28.775796  680821 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:28.775824  680821 api_server.go:131] duration metric: took 11.417031075s to wait for apiserver health ...
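The api_server.go lines above show the health gate: a GET to the apiserver's /healthz endpoint that must return HTTP 200 with body "ok". A minimal sketch of that probe is below; it skips TLS verification for brevity (the real check would trust the cluster CA), and the endpoint URL is copied from this log rather than discovered.

// Minimal sketch: probe the apiserver /healthz endpoint and report the result.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Address taken from the log above; substitute your own apiserver endpoint.
	resp, err := client.Get("https://192.168.72.213:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body) // expect 200 and "ok"
}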
	I0130 22:23:28.775834  680821 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:28.775860  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:28.775909  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:28.821439  680821 cri.go:89] found id: "59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:28.821462  680821 cri.go:89] found id: ""
	I0130 22:23:28.821490  680821 logs.go:284] 1 containers: [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef]
	I0130 22:23:28.821556  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.826438  680821 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:28.826495  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:28.870075  680821 cri.go:89] found id: "7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:28.870104  680821 cri.go:89] found id: ""
	I0130 22:23:28.870113  680821 logs.go:284] 1 containers: [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb]
	I0130 22:23:28.870169  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.874672  680821 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:28.874741  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:28.917733  680821 cri.go:89] found id: "3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:28.917761  680821 cri.go:89] found id: ""
	I0130 22:23:28.917775  680821 logs.go:284] 1 containers: [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810]
	I0130 22:23:28.917835  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.925522  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:28.925586  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:28.979761  680821 cri.go:89] found id: "30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:28.979793  680821 cri.go:89] found id: ""
	I0130 22:23:28.979803  680821 logs.go:284] 1 containers: [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c]
	I0130 22:23:28.979866  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:28.983990  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:28.984044  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:29.022516  680821 cri.go:89] found id: "40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.022543  680821 cri.go:89] found id: ""
	I0130 22:23:29.022553  680821 logs.go:284] 1 containers: [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568]
	I0130 22:23:29.022604  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.026989  680821 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:29.027069  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:29.065167  680821 cri.go:89] found id: "57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.065194  680821 cri.go:89] found id: ""
	I0130 22:23:29.065205  680821 logs.go:284] 1 containers: [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8]
	I0130 22:23:29.065268  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.069436  680821 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:29.069512  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:29.109503  680821 cri.go:89] found id: ""
	I0130 22:23:29.109532  680821 logs.go:284] 0 containers: []
	W0130 22:23:29.109539  680821 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:29.109546  680821 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:29.109599  680821 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:29.158319  680821 cri.go:89] found id: "c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:29.158343  680821 cri.go:89] found id: ""
	I0130 22:23:29.158350  680821 logs.go:284] 1 containers: [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267]
	I0130 22:23:29.158437  680821 ssh_runner.go:195] Run: which crictl
	I0130 22:23:29.163004  680821 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:29.163025  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:29.540158  680821 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:29.540203  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:29.616783  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:29.616947  680821 logs.go:138] Found kubelet problem: Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:29.638172  680821 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:29.638207  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:29.761562  680821 logs.go:123] Gathering logs for coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] ...
	I0130 22:23:29.761613  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810"
	I0130 22:23:29.803930  680821 logs.go:123] Gathering logs for kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] ...
	I0130 22:23:29.803976  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c"
	I0130 22:23:29.866722  680821 logs.go:123] Gathering logs for kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] ...
	I0130 22:23:29.866763  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568"
	I0130 22:23:29.912093  680821 logs.go:123] Gathering logs for kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] ...
	I0130 22:23:29.912125  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8"
	I0130 22:23:29.970591  680821 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:29.970624  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:29.984722  680821 logs.go:123] Gathering logs for kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] ...
	I0130 22:23:29.984748  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef"
	I0130 22:23:30.040548  680821 logs.go:123] Gathering logs for etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] ...
	I0130 22:23:30.040589  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb"
	I0130 22:23:30.089982  680821 logs.go:123] Gathering logs for storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] ...
	I0130 22:23:30.090027  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267"
	I0130 22:23:30.128235  680821 logs.go:123] Gathering logs for container status ...
	I0130 22:23:30.128267  680821 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:30.169872  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.169906  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:30.169982  680821 out.go:239] X Problems detected in kubelet:
	W0130 22:23:30.169997  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: W0130 22:19:01.629421    3851 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	W0130 22:23:30.170008  680821 out.go:239]   Jan 30 22:19:01 embed-certs-713938 kubelet[3851]: E0130 22:19:01.629490    3851 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:embed-certs-713938" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'embed-certs-713938' and this object
	I0130 22:23:30.170026  680821 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:30.170035  680821 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:29.653518  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:32.155147  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:34.653672  681007 pod_ready.go:102] pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace has status "Ready":"False"
	I0130 22:23:36.155187  681007 pod_ready.go:81] duration metric: took 4m0.008494222s waiting for pod "metrics-server-57f55c9bc5-nkcv4" in "kube-system" namespace to be "Ready" ...
	E0130 22:23:36.155214  681007 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0130 22:23:36.155224  681007 pod_ready.go:38] duration metric: took 4m2.362439314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0130 22:23:36.155243  681007 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:23:36.155283  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:36.155343  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:36.205838  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:36.205866  681007 cri.go:89] found id: ""
	I0130 22:23:36.205875  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:36.205945  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.210477  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:36.210558  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:36.253110  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:36.253139  681007 cri.go:89] found id: ""
	I0130 22:23:36.253148  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:36.253204  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.257054  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:36.257124  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:36.296932  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.296959  681007 cri.go:89] found id: ""
	I0130 22:23:36.296971  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:36.297034  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.301030  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:36.301080  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:36.339966  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:36.339992  681007 cri.go:89] found id: ""
	I0130 22:23:36.340002  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:36.340062  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.345411  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:36.345474  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:36.389010  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.389031  681007 cri.go:89] found id: ""
	I0130 22:23:36.389039  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:36.389091  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.392885  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:36.392969  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:36.430208  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:36.430228  681007 cri.go:89] found id: ""
	I0130 22:23:36.430237  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:36.430282  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.434507  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:36.434562  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:36.483517  681007 cri.go:89] found id: ""
	I0130 22:23:36.483542  681007 logs.go:284] 0 containers: []
	W0130 22:23:36.483549  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:36.483555  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:36.483613  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:36.543345  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:36.543370  681007 cri.go:89] found id: ""
	I0130 22:23:36.543380  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:36.543445  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:36.548033  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:36.548064  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:36.630123  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630304  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630456  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:36.630629  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:36.651951  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:36.651990  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:36.667227  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:36.667261  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:36.815056  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:36.815097  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:36.856960  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:36.856992  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:36.903856  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:36.903909  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:37.318919  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:37.318964  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:37.368999  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:37.369037  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:37.412698  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:37.412727  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:37.459356  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:37.459389  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:37.509418  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:37.509454  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:37.551349  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:37.551392  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:37.597863  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597892  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:37.597945  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:37.597958  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597964  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597976  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:37.597982  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:37.597988  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:37.597998  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:40.180631  680821 system_pods.go:59] 8 kube-system pods found
	I0130 22:23:40.180660  680821 system_pods.go:61] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.180665  680821 system_pods.go:61] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.180669  680821 system_pods.go:61] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.180674  680821 system_pods.go:61] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.180678  680821 system_pods.go:61] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.180683  680821 system_pods.go:61] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.180693  680821 system_pods.go:61] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.180701  680821 system_pods.go:61] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.180710  680821 system_pods.go:74] duration metric: took 11.404869748s to wait for pod list to return data ...
	I0130 22:23:40.180749  680821 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:23:40.184327  680821 default_sa.go:45] found service account: "default"
	I0130 22:23:40.184349  680821 default_sa.go:55] duration metric: took 3.590968ms for default service account to be created ...
	I0130 22:23:40.184356  680821 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:23:40.194745  680821 system_pods.go:86] 8 kube-system pods found
	I0130 22:23:40.194769  680821 system_pods.go:89] "coredns-5dd5756b68-l6hkm" [6309cb30-acf7-4925-996d-f059ffe5d3c1] Running
	I0130 22:23:40.194774  680821 system_pods.go:89] "etcd-embed-certs-713938" [cd4e6344-adba-4548-826f-10b14040a8ad] Running
	I0130 22:23:40.194779  680821 system_pods.go:89] "kube-apiserver-embed-certs-713938" [9de768d3-052b-4d82-ac75-de14aed8547d] Running
	I0130 22:23:40.194784  680821 system_pods.go:89] "kube-controller-manager-embed-certs-713938" [60c1fc79-eb23-4935-b944-66b3e0634412] Running
	I0130 22:23:40.194788  680821 system_pods.go:89] "kube-proxy-f7mgv" [57f78a6b-c2f9-471e-9861-8b74fd700ecf] Running
	I0130 22:23:40.194791  680821 system_pods.go:89] "kube-scheduler-embed-certs-713938" [02d19989-98c7-4755-be2d-2fe5c7e51a50] Running
	I0130 22:23:40.194800  680821 system_pods.go:89] "metrics-server-57f55c9bc5-vhxng" [87663986-4226-44fc-9eea-43dd94a12fae] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:23:40.194805  680821 system_pods.go:89] "storage-provisioner" [d2812b55-cbd5-411d-b217-0b902e49285b] Running
	I0130 22:23:40.194812  680821 system_pods.go:126] duration metric: took 10.451241ms to wait for k8s-apps to be running ...
	I0130 22:23:40.194817  680821 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:23:40.194866  680821 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:23:40.214067  680821 system_svc.go:56] duration metric: took 19.241185ms WaitForService to wait for kubelet.
	I0130 22:23:40.214091  680821 kubeadm.go:581] duration metric: took 4m38.603520566s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:23:40.214134  680821 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:23:40.217725  680821 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:23:40.217791  680821 node_conditions.go:123] node cpu capacity is 2
	I0130 22:23:40.217812  680821 node_conditions.go:105] duration metric: took 3.672364ms to run NodePressure ...
	I0130 22:23:40.217827  680821 start.go:228] waiting for startup goroutines ...
	I0130 22:23:40.217840  680821 start.go:233] waiting for cluster config update ...
	I0130 22:23:40.217857  680821 start.go:242] writing updated cluster config ...
	I0130 22:23:40.218114  680821 ssh_runner.go:195] Run: rm -f paused
	I0130 22:23:40.275722  680821 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:23:40.278571  680821 out.go:177] * Done! kubectl is now configured to use "embed-certs-713938" cluster and "default" namespace by default
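The closing checks for the embed-certs-713938 cluster above list the kube-system pods (metrics-server is still Pending, which is why the earlier wait timed out), verify the default service account, and confirm the kubelet unit is active before declaring the start done. The sketch below approximates those last checks; the context name is taken from the log, the kubectl/systemctl invocations are simplified equivalents rather than minikube's exact commands.

// Minimal sketch: list kube-system pod phases and check the kubelet unit is active.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "--context", "embed-certs-713938",
		"-n", "kube-system", "get", "pods", "-o",
		`jsonpath={range .items[*]}{.metadata.name}={.status.phase}{"\n"}{end}`).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
	} else {
		fmt.Print(string(out)) // one "name=Phase" line per pod
	}

	// Exit status 0 from `systemctl is-active --quiet` means the unit is active.
	if err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		fmt.Println("kubelet service is not active:", err)
		return
	}
	fmt.Println("kubelet service is active")
}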
	I0130 22:23:47.599324  681007 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:23:47.615605  681007 api_server.go:72] duration metric: took 4m15.702208866s to wait for apiserver process to appear ...
	I0130 22:23:47.615630  681007 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:23:47.615671  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:47.615722  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:47.660944  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:47.660980  681007 cri.go:89] found id: ""
	I0130 22:23:47.660997  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:47.661051  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.666115  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:47.666180  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:47.709726  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:47.709750  681007 cri.go:89] found id: ""
	I0130 22:23:47.709760  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:47.709821  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.714636  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:47.714691  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:47.760216  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:47.760245  681007 cri.go:89] found id: ""
	I0130 22:23:47.760262  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:47.760323  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.765395  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:47.765450  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:47.815572  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:47.815604  681007 cri.go:89] found id: ""
	I0130 22:23:47.815614  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:47.815674  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.819670  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:47.819729  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:47.858767  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:47.858795  681007 cri.go:89] found id: ""
	I0130 22:23:47.858805  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:47.858865  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.863151  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:47.863276  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:47.911294  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:47.911319  681007 cri.go:89] found id: ""
	I0130 22:23:47.911327  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:47.911387  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.915772  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:47.915852  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:47.952096  681007 cri.go:89] found id: ""
	I0130 22:23:47.952125  681007 logs.go:284] 0 containers: []
	W0130 22:23:47.952136  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:47.952144  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:47.952229  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:47.990137  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:47.990162  681007 cri.go:89] found id: ""
	I0130 22:23:47.990170  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:47.990228  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:47.994880  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:23:47.994902  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:23:48.068521  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068700  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.068849  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.069010  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.091781  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:48.091820  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:23:48.213688  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:23:48.213724  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:48.264200  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:48.264234  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:48.319751  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:48.319785  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:48.357815  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:23:48.357846  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:23:48.406822  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:48.406858  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:48.419822  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:23:48.419852  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:48.471685  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:23:48.471719  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:48.508040  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:48.508088  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:48.559268  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:23:48.559302  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:48.609976  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:48.610007  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:48.966774  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966810  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:23:48.966900  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:23:48.966912  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966919  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966927  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:23:48.966934  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:23:48.966939  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:23:48.966945  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:23:58.967938  681007 api_server.go:253] Checking apiserver healthz at https://192.168.50.254:8444/healthz ...
	I0130 22:23:58.973850  681007 api_server.go:279] https://192.168.50.254:8444/healthz returned 200:
	ok
	I0130 22:23:58.975689  681007 api_server.go:141] control plane version: v1.28.4
	I0130 22:23:58.975713  681007 api_server.go:131] duration metric: took 11.360076324s to wait for apiserver health ...
	I0130 22:23:58.975720  681007 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:23:58.975745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0130 22:23:58.975793  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0130 22:23:59.023436  681007 cri.go:89] found id: "a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:23:59.023458  681007 cri.go:89] found id: ""
	I0130 22:23:59.023466  681007 logs.go:284] 1 containers: [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143]
	I0130 22:23:59.023514  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.027855  681007 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0130 22:23:59.027916  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0130 22:23:59.067167  681007 cri.go:89] found id: "1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:23:59.067194  681007 cri.go:89] found id: ""
	I0130 22:23:59.067204  681007 logs.go:284] 1 containers: [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7]
	I0130 22:23:59.067266  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.076124  681007 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0130 22:23:59.076191  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0130 22:23:59.115918  681007 cri.go:89] found id: "226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:23:59.115947  681007 cri.go:89] found id: ""
	I0130 22:23:59.115956  681007 logs.go:284] 1 containers: [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb]
	I0130 22:23:59.116011  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.120440  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0130 22:23:59.120489  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0130 22:23:59.165157  681007 cri.go:89] found id: "c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.165185  681007 cri.go:89] found id: ""
	I0130 22:23:59.165194  681007 logs.go:284] 1 containers: [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260]
	I0130 22:23:59.165254  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.169774  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0130 22:23:59.169845  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0130 22:23:59.230609  681007 cri.go:89] found id: "39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:23:59.230640  681007 cri.go:89] found id: ""
	I0130 22:23:59.230650  681007 logs.go:284] 1 containers: [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0]
	I0130 22:23:59.230713  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.235563  681007 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0130 22:23:59.235653  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0130 22:23:59.279835  681007 cri.go:89] found id: "bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.279866  681007 cri.go:89] found id: ""
	I0130 22:23:59.279886  681007 logs.go:284] 1 containers: [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288]
	I0130 22:23:59.279954  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.284745  681007 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0130 22:23:59.284809  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0130 22:23:59.331328  681007 cri.go:89] found id: ""
	I0130 22:23:59.331361  681007 logs.go:284] 0 containers: []
	W0130 22:23:59.331374  681007 logs.go:286] No container was found matching "kindnet"
	I0130 22:23:59.331380  681007 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0130 22:23:59.331432  681007 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0130 22:23:59.370468  681007 cri.go:89] found id: "43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.370493  681007 cri.go:89] found id: ""
	I0130 22:23:59.370501  681007 logs.go:284] 1 containers: [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52]
	I0130 22:23:59.370553  681007 ssh_runner.go:195] Run: which crictl
	I0130 22:23:59.375047  681007 logs.go:123] Gathering logs for kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] ...
	I0130 22:23:59.375075  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260"
	I0130 22:23:59.428263  681007 logs.go:123] Gathering logs for kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] ...
	I0130 22:23:59.428297  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288"
	I0130 22:23:59.495321  681007 logs.go:123] Gathering logs for storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] ...
	I0130 22:23:59.495356  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52"
	I0130 22:23:59.537553  681007 logs.go:123] Gathering logs for CRI-O ...
	I0130 22:23:59.537590  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0130 22:23:59.915651  681007 logs.go:123] Gathering logs for dmesg ...
	I0130 22:23:59.915691  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0130 22:23:59.930178  681007 logs.go:123] Gathering logs for describe nodes ...
	I0130 22:23:59.930209  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0130 22:24:00.070621  681007 logs.go:123] Gathering logs for coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] ...
	I0130 22:24:00.070656  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb"
	I0130 22:24:00.111617  681007 logs.go:123] Gathering logs for kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] ...
	I0130 22:24:00.111655  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0"
	I0130 22:24:00.156067  681007 logs.go:123] Gathering logs for container status ...
	I0130 22:24:00.156104  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0130 22:24:00.206264  681007 logs.go:123] Gathering logs for kubelet ...
	I0130 22:24:00.206292  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0130 22:24:00.282212  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282436  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282642  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.282805  681007 logs.go:138] Found kubelet problem: Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.304194  681007 logs.go:123] Gathering logs for kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] ...
	I0130 22:24:00.304223  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143"
	I0130 22:24:00.355473  681007 logs.go:123] Gathering logs for etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] ...
	I0130 22:24:00.355508  681007 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7"
	I0130 22:24:00.402962  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403001  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	W0130 22:24:00.403077  681007 out.go:239] X Problems detected in kubelet:
	W0130 22:24:00.403092  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.991861    3839 reflector.go:535] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403101  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.991991    3839 reflector.go:147] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403114  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: W0130 22:19:30.994050    3839 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	W0130 22:24:00.403124  681007 out.go:239]   Jan 30 22:19:30 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:19:30.994105    3839 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:default-k8s-diff-port-850803" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'default-k8s-diff-port-850803' and this object
	I0130 22:24:00.403136  681007 out.go:309] Setting ErrFile to fd 2...
	I0130 22:24:00.403144  681007 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:24:10.411200  681007 system_pods.go:59] 8 kube-system pods found
	I0130 22:24:10.411225  681007 system_pods.go:61] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.411231  681007 system_pods.go:61] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.411235  681007 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.411239  681007 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.411242  681007 system_pods.go:61] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.411246  681007 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.411252  681007 system_pods.go:61] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.411258  681007 system_pods.go:61] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.411264  681007 system_pods.go:74] duration metric: took 11.435539762s to wait for pod list to return data ...
	I0130 22:24:10.411274  681007 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:24:10.413887  681007 default_sa.go:45] found service account: "default"
	I0130 22:24:10.413915  681007 default_sa.go:55] duration metric: took 2.635544ms for default service account to be created ...
	I0130 22:24:10.413923  681007 system_pods.go:116] waiting for k8s-apps to be running ...
	I0130 22:24:10.420235  681007 system_pods.go:86] 8 kube-system pods found
	I0130 22:24:10.420256  681007 system_pods.go:89] "coredns-5dd5756b68-z27l8" [1ff9627e-373c-45d3-87dc-281daaf057e1] Running
	I0130 22:24:10.420263  681007 system_pods.go:89] "etcd-default-k8s-diff-port-850803" [6efe7b93-6775-4006-a995-3ca4c3f7d26a] Running
	I0130 22:24:10.420271  681007 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-850803" [e2446e03-7cdb-4a6c-909a-d2c59b899761] Running
	I0130 22:24:10.420281  681007 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-850803" [bc8a50ec-c002-437b-8ae2-4093ac67e25e] Running
	I0130 22:24:10.420290  681007 system_pods.go:89] "kube-proxy-9b97q" [b8b32be2-d1fd-4800-b4a4-3db0a23e97f1] Running
	I0130 22:24:10.420301  681007 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-850803" [02feab15-334a-4619-bffe-f5d1e5540279] Running
	I0130 22:24:10.420311  681007 system_pods.go:89] "metrics-server-57f55c9bc5-nkcv4" [8ff91827-4613-4a66-963b-9bec1c1493bc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0130 22:24:10.420319  681007 system_pods.go:89] "storage-provisioner" [a46524c4-645e-4d7e-b0f6-00e4a05f340c] Running
	I0130 22:24:10.420327  681007 system_pods.go:126] duration metric: took 6.398195ms to wait for k8s-apps to be running ...
	I0130 22:24:10.420335  681007 system_svc.go:44] waiting for kubelet service to be running ....
	I0130 22:24:10.420386  681007 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:24:10.438372  681007 system_svc.go:56] duration metric: took 18.027152ms WaitForService to wait for kubelet.
	I0130 22:24:10.438396  681007 kubeadm.go:581] duration metric: took 4m38.525004918s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0130 22:24:10.438424  681007 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:24:10.441514  681007 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:24:10.441561  681007 node_conditions.go:123] node cpu capacity is 2
	I0130 22:24:10.441572  681007 node_conditions.go:105] duration metric: took 3.14294ms to run NodePressure ...
	I0130 22:24:10.441583  681007 start.go:228] waiting for startup goroutines ...
	I0130 22:24:10.441591  681007 start.go:233] waiting for cluster config update ...
	I0130 22:24:10.441601  681007 start.go:242] writing updated cluster config ...
	I0130 22:24:10.441855  681007 ssh_runner.go:195] Run: rm -f paused
	I0130 22:24:10.493274  681007 start.go:600] kubectl: 1.29.1, cluster: 1.28.4 (minor skew: 1)
	I0130 22:24:10.495414  681007 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-850803" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:14:23 UTC, ends at Tue 2024-01-30 22:32:45 UTC. --
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.014063540Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653965014052710,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=2e03c0fc-ea44-4f94-92f9-391444c537f4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.014827010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0eb2c37c-b87f-4c4b-80ca-cd941884f81d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.014874182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0eb2c37c-b87f-4c4b-80ca-cd941884f81d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.015092129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0eb2c37c-b87f-4c4b-80ca-cd941884f81d name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.061299018Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=eff3472d-5a6a-43a0-b286-86bc9ffa380e name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.061385281Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=eff3472d-5a6a-43a0-b286-86bc9ffa380e name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.063070314Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=4d982006-a9ce-4477-bafc-0d1a71361e4e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.063652395Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653965063632791,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=4d982006-a9ce-4477-bafc-0d1a71361e4e name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.064434332Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fb5cdfa5-178f-4caa-a3fb-e03f92d53398 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.064499954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fb5cdfa5-178f-4caa-a3fb-e03f92d53398 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.064769686Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fb5cdfa5-178f-4caa-a3fb-e03f92d53398 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.111378375Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=dfa766f5-3405-4c38-b55b-1e93a9878df7 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.111438703Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=dfa766f5-3405-4c38-b55b-1e93a9878df7 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.112742984Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=1473226e-d3dc-42f3-8f34-55f179a81d20 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.113238677Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653965113222251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=1473226e-d3dc-42f3-8f34-55f179a81d20 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.113994370Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=98af4b23-6730-406a-8086-95a0dea38b11 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.114041211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=98af4b23-6730-406a-8086-95a0dea38b11 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.114498297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=98af4b23-6730-406a-8086-95a0dea38b11 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.157220666Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=34ada56b-e564-4b20-b0e9-49506fa07a04 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.157301036Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=34ada56b-e564-4b20-b0e9-49506fa07a04 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.158704402Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fa93e62e-4db9-4cf6-9a84-a96ca59a1f73 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.159060503Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706653965159048737,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:114472,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fa93e62e-4db9-4cf6-9a84-a96ca59a1f73 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.159670079Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8b29a378-77df-41da-aa83-cf67f348eaa1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.159712571Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8b29a378-77df-41da-aa83-cf67f348eaa1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:32:45 old-k8s-version-912992 crio[711]: time="2024-01-30 22:32:45.159902643Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706652931517177664,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7cf78188f881059f018f9e84d28221381653ed558c06e74c2d07439afd45d381,PodSandboxId:7fc3fdb2178813e5700175c1ecd70f8a649147e0f8a4be4c140872e3362006e9,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1706652903428174699,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: a24f5188-6b75-4de9-8a25-84a67697bd40,},Annotations:map[string]string{io.kubernetes.container.hash: 833b78b1,io.kubernetes.container.restartCount: 0,io.kubernetes.contai
ner.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7,PodSandboxId:061e6cd887b0296917c4e17f14bc4ef0407ec7ef7a5ee56ec2bec783ac319f31,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1706652902600009875,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-7wr8t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b6a3982-1256-41e6-9311-1195746df25a,},Annotations:map[string]string{io.kubernetes.container.hash: a6174d05,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\
":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324,PodSandboxId:2e2da6c5a177c21cf7b7a5df4d4bbc9ad1d4c1be2012abcf5d39196fd6efe79d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1706652901027831483,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-qm7xx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4a8cca85-87c7-4d02-b5cd
-4bb83bd5ef7d,},Annotations:map[string]string{io.kubernetes.container.hash: d44458e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f,PodSandboxId:8f2e98ca685442dc1eb1b927978adeb9bc28bff5934f4f39476042938c559411,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1706652900550781457,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9cb43cc9-7d15-41a9-90b6-66
fc99fa67e5,},Annotations:map[string]string{io.kubernetes.container.hash: 7731a4d3,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59,PodSandboxId:c502e6abcef1d2afb4ca56b8d15b1e09835b67a1889283c6dc1e1fd05c3cf583,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1706652893705535492,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc1c785143b8b75ceb521c2487b9ea18,},Annotations:map[string]string{io.kuber
netes.container.hash: 5a1815b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184,PodSandboxId:1789c91b615b49d4761385026a26ec8aa5775e69d2e1a99209298299158a71e6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1706652892260071302,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933,PodSandboxId:89c866f1c9af578f7ed6760df325710e099aea335423d695b167b6956dd6ab31,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1706652891628531949,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]s
tring{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4,PodSandboxId:3ab003d2bc5c39143b8eaa727aa8adf156a75cfbc4b16cf6f40df29d7f708672,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1706652891607687060,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-912992,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 32c367f5dfa3e794388fc594b045f44b,},Annotations:map[string]string{io.ku
bernetes.container.hash: 86216a92,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8b29a378-77df-41da-aa83-cf67f348eaa1 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	1b2c0e91a4312       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       1                   8f2e98ca68544       storage-provisioner
	7cf78188f8810       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   17 minutes ago      Running             busybox                   0                   7fc3fdb217881       busybox
	61ab23a25f123       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      17 minutes ago      Running             coredns                   0                   061e6cd887b02       coredns-5644d7b6d9-7wr8t
	4c48b0d429b38       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      17 minutes ago      Running             kube-proxy                0                   2e2da6c5a177c       kube-proxy-qm7xx
	ddad721f8f253       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Exited              storage-provisioner       0                   8f2e98ca68544       storage-provisioner
	2123e32c8a2e1       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      17 minutes ago      Running             etcd                      0                   c502e6abcef1d       etcd-old-k8s-version-912992
	15f24b3dcf08a       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      17 minutes ago      Running             kube-scheduler            0                   1789c91b615b4       kube-scheduler-old-k8s-version-912992
	dbd8457575a94       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      17 minutes ago      Running             kube-controller-manager   0                   89c866f1c9af5       kube-controller-manager-old-k8s-version-912992
	642acc732ea38       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      17 minutes ago      Running             kube-apiserver            0                   3ab003d2bc5c3       kube-apiserver-old-k8s-version-912992
	
	
	==> coredns [61ab23a25f1233ac6ec3eb245fba7c188811e8822dd88b97227d7d820ba911f7] <==
	.:53
	2024-01-30T22:04:20.926Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-30T22:04:20.926Z [INFO] CoreDNS-1.6.2
	2024-01-30T22:04:20.926Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-30T22:04:22.070Z [INFO] 127.0.0.1:50418 - 56786 "HINFO IN 7115203054942692213.8487583809034102998. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.144213359s
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2024-01-30T22:15:02.876Z [INFO] plugin/reload: Running configuration MD5 = 73c7bdb6903c83cd433a46b2e9eb4233
	2024-01-30T22:15:02.877Z [INFO] CoreDNS-1.6.2
	2024-01-30T22:15:02.877Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2024-01-30T22:15:02.924Z [INFO] 127.0.0.1:41742 - 55787 "HINFO IN 5107901589387885354.4389539816333725312. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.046383682s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-912992
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-912992
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=old-k8s-version-912992
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_04_02_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:03:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:32:28 +0000   Tue, 30 Jan 2024 22:03:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:32:28 +0000   Tue, 30 Jan 2024 22:03:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:32:28 +0000   Tue, 30 Jan 2024 22:03:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:32:28 +0000   Tue, 30 Jan 2024 22:15:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.84
	  Hostname:    old-k8s-version-912992
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 2b328d8096d94a12b7148e9c4c55cb20
	 System UUID:                2b328d80-96d9-4a12-b714-8e9c4c55cb20
	 Boot ID:                    6423afe2-37ad-40e5-b3cd-05296015b92f
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                coredns-5644d7b6d9-7wr8t                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                etcd-old-k8s-version-912992                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-apiserver-old-k8s-version-912992             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-912992    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-proxy-qm7xx                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                kube-scheduler-old-k8s-version-912992             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                metrics-server-74d5856cc6-w74c9                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientPID
	  Normal  Starting                 28m                kube-proxy, old-k8s-version-912992  Starting kube-proxy.
	  Normal  Starting                 17m                kubelet, old-k8s-version-912992     Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x7 over 17m)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet, old-k8s-version-912992     Node old-k8s-version-912992 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  17m                kubelet, old-k8s-version-912992     Updated Node Allocatable limit across pods
	  Normal  Starting                 17m                kube-proxy, old-k8s-version-912992  Starting kube-proxy.
	
	
	==> dmesg <==
	[Jan30 22:14] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.074838] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.918628] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.461539] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.164732] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.497368] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000059] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +5.527916] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.119104] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.164230] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.122803] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.225326] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +18.102883] systemd-fstab-generator[1014]: Ignoring "noauto" for root device
	[  +0.415774] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Jan30 22:15] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [2123e32c8a2e1ddf405153d75a7214cc818c9b4715098eea8fe82fded748cf59] <==
	2024-01-30 22:14:53.830205 I | etcdserver: restarting member 9759e6b18ded37f5 in cluster 5f38fc1d36b986e7 at commit index 540
	2024-01-30 22:14:53.830333 I | raft: 9759e6b18ded37f5 became follower at term 2
	2024-01-30 22:14:53.830367 I | raft: newRaft 9759e6b18ded37f5 [peers: [], term: 2, commit: 540, applied: 0, lastindex: 540, lastterm: 2]
	2024-01-30 22:14:53.839996 W | auth: simple token is not cryptographically signed
	2024-01-30 22:14:53.842978 I | etcdserver: starting server... [version: 3.3.15, cluster version: to_be_decided]
	2024-01-30 22:14:53.844741 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, ca = , trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2024-01-30 22:14:53.844882 I | embed: listening for metrics on http://192.168.39.84:2381
	2024-01-30 22:14:53.845715 I | embed: listening for metrics on http://127.0.0.1:2381
	2024-01-30 22:14:53.845922 I | etcdserver/membership: added member 9759e6b18ded37f5 [https://192.168.39.84:2380] to cluster 5f38fc1d36b986e7
	2024-01-30 22:14:53.846046 N | etcdserver/membership: set the initial cluster version to 3.3
	2024-01-30 22:14:53.846090 I | etcdserver/api: enabled capabilities for version 3.3
	2024-01-30 22:14:55.030802 I | raft: 9759e6b18ded37f5 is starting a new election at term 2
	2024-01-30 22:14:55.030898 I | raft: 9759e6b18ded37f5 became candidate at term 3
	2024-01-30 22:14:55.030925 I | raft: 9759e6b18ded37f5 received MsgVoteResp from 9759e6b18ded37f5 at term 3
	2024-01-30 22:14:55.030946 I | raft: 9759e6b18ded37f5 became leader at term 3
	2024-01-30 22:14:55.030963 I | raft: raft.node: 9759e6b18ded37f5 elected leader 9759e6b18ded37f5 at term 3
	2024-01-30 22:14:55.032860 I | etcdserver: published {Name:old-k8s-version-912992 ClientURLs:[https://192.168.39.84:2379]} to cluster 5f38fc1d36b986e7
	2024-01-30 22:14:55.033358 I | embed: ready to serve client requests
	2024-01-30 22:14:55.034526 I | embed: ready to serve client requests
	2024-01-30 22:14:55.036218 I | embed: serving client requests on 192.168.39.84:2379
	2024-01-30 22:14:55.036973 I | embed: serving client requests on 127.0.0.1:2379
	2024-01-30 22:24:55.069275 I | mvcc: store.index: compact 826
	2024-01-30 22:24:55.071598 I | mvcc: finished scheduled compaction at 826 (took 1.97441ms)
	2024-01-30 22:29:55.075377 I | mvcc: store.index: compact 1044
	2024-01-30 22:29:55.076969 I | mvcc: finished scheduled compaction at 1044 (took 934.445µs)
	
	
	==> kernel <==
	 22:32:45 up 18 min,  0 users,  load average: 0.20, 0.24, 0.18
	Linux old-k8s-version-912992 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [642acc732ea38dc603ba37e39af5571de18ee0a67b19ad84225fafb66ef67ab4] <==
	I0130 22:24:59.286039       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:24:59.286206       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:24:59.286288       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:24:59.286300       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:25:59.286612       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:25:59.286737       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:25:59.286809       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:25:59.286826       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:27:59.287238       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:27:59.287571       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:27:59.287659       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:27:59.287708       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:29:59.288650       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:29:59.288752       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:29:59.288805       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:29:59.288812       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:30:59.289201       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0130 22:30:59.289311       1 handler_proxy.go:99] no RequestInfo found in the context
	E0130 22:30:59.289346       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:30:59.289353       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [dbd8457575a94be9d66fdf9ba658f1d397bc2bd9747ec7f80e6cab1707a76933] <==
	E0130 22:26:21.942513       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:26:28.629392       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:26:52.195022       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:27:00.631522       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:27:22.446877       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:27:32.633817       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:27:52.699327       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:28:04.637425       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:28:22.951369       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:28:36.639824       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:28:53.203587       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:29:08.642195       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:29:23.455335       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:29:40.644615       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:29:53.707169       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:30:12.647051       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:30:23.959211       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:30:44.649247       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:30:54.211247       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:31:16.651065       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:31:24.463747       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:31:48.653825       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:31:54.715899       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0130 22:32:20.656577       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0130 22:32:24.967550       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	
	==> kube-proxy [4c48b0d429b380c88a7403d354ab63d19522a510b1ea1890adabfc13a64cb324] <==
	W0130 22:04:19.978775       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0130 22:04:20.008477       1 node.go:135] Successfully retrieved node IP: 192.168.39.84
	I0130 22:04:20.008548       1 server_others.go:149] Using iptables Proxier.
	I0130 22:04:20.010459       1 server.go:529] Version: v1.16.0
	I0130 22:04:20.016078       1 config.go:313] Starting service config controller
	I0130 22:04:20.016134       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0130 22:04:20.016174       1 config.go:131] Starting endpoints config controller
	I0130 22:04:20.016342       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0130 22:04:20.120480       1 shared_informer.go:204] Caches are synced for service config 
	I0130 22:04:20.120748       1 shared_informer.go:204] Caches are synced for endpoints config 
	W0130 22:15:01.397449       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0130 22:15:01.499627       1 node.go:135] Successfully retrieved node IP: 192.168.39.84
	I0130 22:15:01.499703       1 server_others.go:149] Using iptables Proxier.
	I0130 22:15:01.523326       1 server.go:529] Version: v1.16.0
	I0130 22:15:01.533037       1 config.go:131] Starting endpoints config controller
	I0130 22:15:01.534607       1 config.go:313] Starting service config controller
	I0130 22:15:01.539592       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0130 22:15:01.539582       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0130 22:15:01.640573       1 shared_informer.go:204] Caches are synced for endpoints config 
	I0130 22:15:01.640660       1 shared_informer.go:204] Caches are synced for service config 
	
	
	==> kube-scheduler [15f24b3dcf08a98f334727e3f34f8eaab0bb660028d585e5665f52e6f8442184] <==
	E0130 22:03:58.868477       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:03:58.868567       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:03:58.871241       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 22:03:58.875330       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:03:58.876281       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0130 22:03:58.877393       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:03:58.878433       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 22:03:58.879748       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 22:03:58.882038       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:03:58.883524       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 22:04:17.326850       1 factory.go:585] pod is already present in the activeQ
	E0130 22:04:19.002553       1 scheduler.go:658] error binding pod: Operation cannot be fulfilled on pods/binding "coredns-5644d7b6d9-q2xnt": pod coredns-5644d7b6d9-q2xnt is being deleted, cannot be assigned to a host
	E0130 22:04:19.004162       1 factory.go:561] Error scheduling kube-system/coredns-5644d7b6d9-q2xnt: Operation cannot be fulfilled on pods/binding "coredns-5644d7b6d9-q2xnt": pod coredns-5644d7b6d9-q2xnt is being deleted, cannot be assigned to a host; retrying
	E0130 22:04:19.133984       1 scheduler.go:333] Error updating the condition of the pod kube-system/coredns-5644d7b6d9-q2xnt: Operation cannot be fulfilled on pods "coredns-5644d7b6d9-q2xnt": the object has been modified; please apply your changes to the latest version and try again
	I0130 22:14:52.780316       1 serving.go:319] Generated self-signed cert in-memory
	W0130 22:14:58.274824       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0130 22:14:58.274869       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:14:58.274879       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0130 22:14:58.274886       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0130 22:14:58.285579       1 server.go:143] Version: v1.16.0
	I0130 22:14:58.285807       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0130 22:14:58.297777       1 authorization.go:47] Authorization is disabled
	W0130 22:14:58.297900       1 authentication.go:79] Authentication is disabled
	I0130 22:14:58.297911       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0130 22:14:58.300494       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:14:23 UTC, ends at Tue 2024-01-30 22:32:45 UTC. --
	Jan 30 22:28:24 old-k8s-version-912992 kubelet[1020]: E0130 22:28:24.309905    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:28:39 old-k8s-version-912992 kubelet[1020]: E0130 22:28:39.309673    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:28:52 old-k8s-version-912992 kubelet[1020]: E0130 22:28:52.310834    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:29:06 old-k8s-version-912992 kubelet[1020]: E0130 22:29:06.310454    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:29:18 old-k8s-version-912992 kubelet[1020]: E0130 22:29:18.310945    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:29:29 old-k8s-version-912992 kubelet[1020]: E0130 22:29:29.309822    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:29:40 old-k8s-version-912992 kubelet[1020]: E0130 22:29:40.310016    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:29:50 old-k8s-version-912992 kubelet[1020]: E0130 22:29:50.371505    1020 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Jan 30 22:29:51 old-k8s-version-912992 kubelet[1020]: E0130 22:29:51.310323    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:30:03 old-k8s-version-912992 kubelet[1020]: E0130 22:30:03.309966    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:30:18 old-k8s-version-912992 kubelet[1020]: E0130 22:30:18.310083    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:30:33 old-k8s-version-912992 kubelet[1020]: E0130 22:30:33.311556    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:30:45 old-k8s-version-912992 kubelet[1020]: E0130 22:30:45.309910    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:30:56 old-k8s-version-912992 kubelet[1020]: E0130 22:30:56.310474    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:31:08 old-k8s-version-912992 kubelet[1020]: E0130 22:31:08.310744    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:31:19 old-k8s-version-912992 kubelet[1020]: E0130 22:31:19.324034    1020 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:31:19 old-k8s-version-912992 kubelet[1020]: E0130 22:31:19.324192    1020 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:31:19 old-k8s-version-912992 kubelet[1020]: E0130 22:31:19.324253    1020 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Jan 30 22:31:19 old-k8s-version-912992 kubelet[1020]: E0130 22:31:19.324283    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Jan 30 22:31:32 old-k8s-version-912992 kubelet[1020]: E0130 22:31:32.311987    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:31:45 old-k8s-version-912992 kubelet[1020]: E0130 22:31:45.310320    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:31:58 old-k8s-version-912992 kubelet[1020]: E0130 22:31:58.309979    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:32:11 old-k8s-version-912992 kubelet[1020]: E0130 22:32:11.309777    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:32:22 old-k8s-version-912992 kubelet[1020]: E0130 22:32:22.309693    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Jan 30 22:32:33 old-k8s-version-912992 kubelet[1020]: E0130 22:32:33.310218    1020 pod_workers.go:191] Error syncing pod a6e0dfa3-af30-4543-ae29-70ff582bc6ca ("metrics-server-74d5856cc6-w74c9_kube-system(a6e0dfa3-af30-4543-ae29-70ff582bc6ca)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	
	==> storage-provisioner [1b2c0e91a4312acb2afeb4cd9d4d0889a704785d1413ce953456993b1924ed38] <==
	I0130 22:15:31.621489       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:15:31.641489       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:15:31.641580       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:15:49.042563       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:15:49.044255       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a5a79f0-2c74-47af-97cc-5ecbad74ac28", APIVersion:"v1", ResourceVersion:"627", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-912992_f1f607de-4aaa-4be2-8149-e11afa9f5248 became leader
	I0130 22:15:49.045868       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_f1f607de-4aaa-4be2-8149-e11afa9f5248!
	I0130 22:15:49.146207       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_f1f607de-4aaa-4be2-8149-e11afa9f5248!
	
	
	==> storage-provisioner [ddad721f8f253fc75b4482501539599b1c2748fa1ec5953f6420178dc1d8dc8f] <==
	I0130 22:04:20.966303       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:04:20.978788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:04:20.978909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:04:20.989883       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:04:20.990140       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_882d780a-2baa-42ed-b644-7a8e7b488d71!
	I0130 22:04:20.993675       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9a5a79f0-2c74-47af-97cc-5ecbad74ac28", APIVersion:"v1", ResourceVersion:"382", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-912992_882d780a-2baa-42ed-b644-7a8e7b488d71 became leader
	I0130 22:04:21.091417       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-912992_882d780a-2baa-42ed-b644-7a8e7b488d71!
	E0130 22:05:51.635541       1 leaderelection.go:325] error retrieving resource lock kube-system/k8s.io-minikube-hostpath: Get "https://10.96.0.1:443/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath": dial tcp 10.96.0.1:443: connect: connection refused
	I0130 22:15:00.665983       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0130 22:15:30.669667       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-912992 -n old-k8s-version-912992
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-912992 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-w74c9
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-912992 describe pod metrics-server-74d5856cc6-w74c9
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-912992 describe pod metrics-server-74d5856cc6-w74c9: exit status 1 (67.396888ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-w74c9" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-912992 describe pod metrics-server-74d5856cc6-w74c9: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (513.03s)
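For local triage of this failure, the following is a minimal sketch (not part of the test harness; it assumes client-go and a kubeconfig pointing at the old-k8s-version-912992 profile) that lists the pods matching the k8s-app=kubernetes-dashboard selector the AddonExistsAfterStop check waits on, alongside their phases:

```go
// Hypothetical helper, not harness code: list the dashboard pods the
// AddonExistsAfterStop check looks for and print their current phase.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes ~/.kube/config already points at the minikube profile under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
		metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Name, p.Status.Phase)
	}
}
```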

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0130 22:28:15.636080  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 22:29:25.157848  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:29:32.716999  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 22:31:52.587701  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-023824 -n no-preload-023824
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:33:32.734326603 +0000 UTC m=+5584.135268718
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-023824 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-023824 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.09µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-023824 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
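The image check at start_stop_delete_test.go:297 expected the dashboard deployment to reference registry.k8s.io/echoserver:1.4 (the image substituted via --images=MetricsScraper=..., as recorded in the Audit table in the logs below), but the deployment info is empty because the describe call itself hit the already-expired test context. A manual way to inspect the container images outside the harness, assuming the cluster is reachable, would be something like:

	kubectl --context no-preload-023824 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'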
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-023824 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-023824 logs -n 25: (1.415331748s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p stopped-upgrade-742001                              | stopped-upgrade-742001       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-822826                              | cert-expiration-822826       | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:03 UTC |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:03 UTC | 30 Jan 24 22:06 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC | 30 Jan 24 22:32 UTC |
	| start   | -p newest-cni-507807 --memory=2200 --alsologtostderr   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
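	For readability, the wrapped rows of the most recent entry above correspond to a single invocation along these lines (reassembled from the table; the exact shell quoting used by the harness is not shown in the log):

	out/minikube-linux-amd64 start -p newest-cni-507807 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.29.0-rc.2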
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:32:47
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:32:47.836778  686214 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:32:47.836996  686214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:32:47.837009  686214 out.go:309] Setting ErrFile to fd 2...
	I0130 22:32:47.837014  686214 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:32:47.837255  686214 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:32:47.837978  686214 out.go:303] Setting JSON to false
	I0130 22:32:47.839206  686214 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11720,"bootTime":1706642248,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:32:47.839262  686214 start.go:138] virtualization: kvm guest
	I0130 22:32:47.842582  686214 out.go:177] * [newest-cni-507807] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:32:47.844294  686214 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:32:47.844279  686214 notify.go:220] Checking for updates...
	I0130 22:32:47.846098  686214 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:32:47.847879  686214 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:32:47.850175  686214 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:32:47.851625  686214 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:32:47.852933  686214 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:32:47.854620  686214 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:32:47.854735  686214 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:32:47.854840  686214 config.go:182] Loaded profile config "no-preload-023824": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:32:47.854971  686214 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:32:47.891638  686214 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 22:32:47.893377  686214 start.go:298] selected driver: kvm2
	I0130 22:32:47.893395  686214 start.go:902] validating driver "kvm2" against <nil>
	I0130 22:32:47.893405  686214 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:32:47.894214  686214 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:32:47.894304  686214 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:32:47.910530  686214 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:32:47.910568  686214 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0130 22:32:47.910587  686214 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0130 22:32:47.910780  686214 start_flags.go:946] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0130 22:32:47.910856  686214 cni.go:84] Creating CNI manager for ""
	I0130 22:32:47.910875  686214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:32:47.910914  686214 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 22:32:47.910926  686214 start_flags.go:321] config:
	{Name:newest-cni-507807 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-507807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRunti
me:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:32:47.911139  686214 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:32:47.913265  686214 out.go:177] * Starting control plane node newest-cni-507807 in cluster newest-cni-507807
	I0130 22:32:47.914528  686214 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:32:47.914572  686214 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 22:32:47.914584  686214 cache.go:56] Caching tarball of preloaded images
	I0130 22:32:47.914668  686214 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:32:47.914684  686214 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0130 22:32:47.914799  686214 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/config.json ...
	I0130 22:32:47.914830  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/config.json: {Name:mk81570407f5d4996058025017b1e2b2861438ee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:32:47.915036  686214 start.go:365] acquiring machines lock for newest-cni-507807: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:32:47.915083  686214 start.go:369] acquired machines lock for "newest-cni-507807" in 25.573µs
	I0130 22:32:47.915101  686214 start.go:93] Provisioning new machine with config: &{Name:newest-cni-507807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCo
nfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-507807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:32:47.915208  686214 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 22:32:47.916952  686214 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0130 22:32:47.917110  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:32:47.917147  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:32:47.930978  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34593
	I0130 22:32:47.931454  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:32:47.932064  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:32:47.932094  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:32:47.932411  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:32:47.932596  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetMachineName
	I0130 22:32:47.932769  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:32:47.932937  686214 start.go:159] libmachine.API.Create for "newest-cni-507807" (driver="kvm2")
	I0130 22:32:47.932968  686214 client.go:168] LocalClient.Create starting
	I0130 22:32:47.933032  686214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem
	I0130 22:32:47.933070  686214 main.go:141] libmachine: Decoding PEM data...
	I0130 22:32:47.933093  686214 main.go:141] libmachine: Parsing certificate...
	I0130 22:32:47.933160  686214 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem
	I0130 22:32:47.933197  686214 main.go:141] libmachine: Decoding PEM data...
	I0130 22:32:47.933219  686214 main.go:141] libmachine: Parsing certificate...
	I0130 22:32:47.933245  686214 main.go:141] libmachine: Running pre-create checks...
	I0130 22:32:47.933260  686214 main.go:141] libmachine: (newest-cni-507807) Calling .PreCreateCheck
	I0130 22:32:47.933668  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetConfigRaw
	I0130 22:32:47.934088  686214 main.go:141] libmachine: Creating machine...
	I0130 22:32:47.934101  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Create
	I0130 22:32:47.934253  686214 main.go:141] libmachine: (newest-cni-507807) Creating KVM machine...
	I0130 22:32:47.935466  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found existing default KVM network
	I0130 22:32:47.937201  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:47.936990  686237 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00010f830}
	I0130 22:32:47.942373  686214 main.go:141] libmachine: (newest-cni-507807) DBG | trying to create private KVM network mk-newest-cni-507807 192.168.39.0/24...
	I0130 22:32:48.015573  686214 main.go:141] libmachine: (newest-cni-507807) DBG | private KVM network mk-newest-cni-507807 192.168.39.0/24 created
	I0130 22:32:48.015616  686214 main.go:141] libmachine: (newest-cni-507807) Setting up store path in /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807 ...
	I0130 22:32:48.015639  686214 main.go:141] libmachine: (newest-cni-507807) Building disk image from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 22:32:48.015701  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.015644  686237 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:32:48.015869  686214 main.go:141] libmachine: (newest-cni-507807) Downloading /home/jenkins/minikube-integration/18014-640473/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 22:32:48.266855  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.266710  686237 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa...
	I0130 22:32:48.333982  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.333861  686237 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/newest-cni-507807.rawdisk...
	I0130 22:32:48.334014  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Writing magic tar header
	I0130 22:32:48.334035  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Writing SSH key tar header
	I0130 22:32:48.334195  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:48.334078  686237 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807 ...
	I0130 22:32:48.334231  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807
	I0130 22:32:48.334275  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807 (perms=drwx------)
	I0130 22:32:48.334307  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines (perms=drwxr-xr-x)
	I0130 22:32:48.334321  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines
	I0130 22:32:48.334339  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube (perms=drwxr-xr-x)
	I0130 22:32:48.334354  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473 (perms=drwxrwxr-x)
	I0130 22:32:48.334365  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:32:48.334377  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473
	I0130 22:32:48.334387  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 22:32:48.334403  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 22:32:48.334420  686214 main.go:141] libmachine: (newest-cni-507807) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 22:32:48.334446  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home/jenkins
	I0130 22:32:48.334458  686214 main.go:141] libmachine: (newest-cni-507807) Creating domain...
	I0130 22:32:48.334475  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Checking permissions on dir: /home
	I0130 22:32:48.334488  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Skipping /home - not owner
	I0130 22:32:48.335816  686214 main.go:141] libmachine: (newest-cni-507807) define libvirt domain using xml: 
	I0130 22:32:48.335843  686214 main.go:141] libmachine: (newest-cni-507807) <domain type='kvm'>
	I0130 22:32:48.335854  686214 main.go:141] libmachine: (newest-cni-507807)   <name>newest-cni-507807</name>
	I0130 22:32:48.335885  686214 main.go:141] libmachine: (newest-cni-507807)   <memory unit='MiB'>2200</memory>
	I0130 22:32:48.335901  686214 main.go:141] libmachine: (newest-cni-507807)   <vcpu>2</vcpu>
	I0130 22:32:48.335909  686214 main.go:141] libmachine: (newest-cni-507807)   <features>
	I0130 22:32:48.335919  686214 main.go:141] libmachine: (newest-cni-507807)     <acpi/>
	I0130 22:32:48.335931  686214 main.go:141] libmachine: (newest-cni-507807)     <apic/>
	I0130 22:32:48.335940  686214 main.go:141] libmachine: (newest-cni-507807)     <pae/>
	I0130 22:32:48.335953  686214 main.go:141] libmachine: (newest-cni-507807)     
	I0130 22:32:48.335963  686214 main.go:141] libmachine: (newest-cni-507807)   </features>
	I0130 22:32:48.335974  686214 main.go:141] libmachine: (newest-cni-507807)   <cpu mode='host-passthrough'>
	I0130 22:32:48.335985  686214 main.go:141] libmachine: (newest-cni-507807)   
	I0130 22:32:48.335993  686214 main.go:141] libmachine: (newest-cni-507807)   </cpu>
	I0130 22:32:48.336004  686214 main.go:141] libmachine: (newest-cni-507807)   <os>
	I0130 22:32:48.336018  686214 main.go:141] libmachine: (newest-cni-507807)     <type>hvm</type>
	I0130 22:32:48.336032  686214 main.go:141] libmachine: (newest-cni-507807)     <boot dev='cdrom'/>
	I0130 22:32:48.336044  686214 main.go:141] libmachine: (newest-cni-507807)     <boot dev='hd'/>
	I0130 22:32:48.336057  686214 main.go:141] libmachine: (newest-cni-507807)     <bootmenu enable='no'/>
	I0130 22:32:48.336066  686214 main.go:141] libmachine: (newest-cni-507807)   </os>
	I0130 22:32:48.336074  686214 main.go:141] libmachine: (newest-cni-507807)   <devices>
	I0130 22:32:48.336096  686214 main.go:141] libmachine: (newest-cni-507807)     <disk type='file' device='cdrom'>
	I0130 22:32:48.336116  686214 main.go:141] libmachine: (newest-cni-507807)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/boot2docker.iso'/>
	I0130 22:32:48.336129  686214 main.go:141] libmachine: (newest-cni-507807)       <target dev='hdc' bus='scsi'/>
	I0130 22:32:48.336142  686214 main.go:141] libmachine: (newest-cni-507807)       <readonly/>
	I0130 22:32:48.336151  686214 main.go:141] libmachine: (newest-cni-507807)     </disk>
	I0130 22:32:48.336164  686214 main.go:141] libmachine: (newest-cni-507807)     <disk type='file' device='disk'>
	I0130 22:32:48.336179  686214 main.go:141] libmachine: (newest-cni-507807)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 22:32:48.336197  686214 main.go:141] libmachine: (newest-cni-507807)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/newest-cni-507807.rawdisk'/>
	I0130 22:32:48.336210  686214 main.go:141] libmachine: (newest-cni-507807)       <target dev='hda' bus='virtio'/>
	I0130 22:32:48.336222  686214 main.go:141] libmachine: (newest-cni-507807)     </disk>
	I0130 22:32:48.336234  686214 main.go:141] libmachine: (newest-cni-507807)     <interface type='network'>
	I0130 22:32:48.336254  686214 main.go:141] libmachine: (newest-cni-507807)       <source network='mk-newest-cni-507807'/>
	I0130 22:32:48.336266  686214 main.go:141] libmachine: (newest-cni-507807)       <model type='virtio'/>
	I0130 22:32:48.336277  686214 main.go:141] libmachine: (newest-cni-507807)     </interface>
	I0130 22:32:48.336289  686214 main.go:141] libmachine: (newest-cni-507807)     <interface type='network'>
	I0130 22:32:48.336303  686214 main.go:141] libmachine: (newest-cni-507807)       <source network='default'/>
	I0130 22:32:48.336315  686214 main.go:141] libmachine: (newest-cni-507807)       <model type='virtio'/>
	I0130 22:32:48.336325  686214 main.go:141] libmachine: (newest-cni-507807)     </interface>
	I0130 22:32:48.336342  686214 main.go:141] libmachine: (newest-cni-507807)     <serial type='pty'>
	I0130 22:32:48.336354  686214 main.go:141] libmachine: (newest-cni-507807)       <target port='0'/>
	I0130 22:32:48.336367  686214 main.go:141] libmachine: (newest-cni-507807)     </serial>
	I0130 22:32:48.336378  686214 main.go:141] libmachine: (newest-cni-507807)     <console type='pty'>
	I0130 22:32:48.336387  686214 main.go:141] libmachine: (newest-cni-507807)       <target type='serial' port='0'/>
	I0130 22:32:48.336397  686214 main.go:141] libmachine: (newest-cni-507807)     </console>
	I0130 22:32:48.336410  686214 main.go:141] libmachine: (newest-cni-507807)     <rng model='virtio'>
	I0130 22:32:48.336425  686214 main.go:141] libmachine: (newest-cni-507807)       <backend model='random'>/dev/random</backend>
	I0130 22:32:48.336436  686214 main.go:141] libmachine: (newest-cni-507807)     </rng>
	I0130 22:32:48.336449  686214 main.go:141] libmachine: (newest-cni-507807)     
	I0130 22:32:48.336460  686214 main.go:141] libmachine: (newest-cni-507807)     
	I0130 22:32:48.336473  686214 main.go:141] libmachine: (newest-cni-507807)   </devices>
	I0130 22:32:48.336485  686214 main.go:141] libmachine: (newest-cni-507807) </domain>
	I0130 22:32:48.336497  686214 main.go:141] libmachine: (newest-cni-507807) 
	I0130 22:32:48.341375  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:82:2f:f1 in network default
	I0130 22:32:48.342016  686214 main.go:141] libmachine: (newest-cni-507807) Ensuring networks are active...
	I0130 22:32:48.342046  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:48.342816  686214 main.go:141] libmachine: (newest-cni-507807) Ensuring network default is active
	I0130 22:32:48.343202  686214 main.go:141] libmachine: (newest-cni-507807) Ensuring network mk-newest-cni-507807 is active
	I0130 22:32:48.343874  686214 main.go:141] libmachine: (newest-cni-507807) Getting domain xml...
	I0130 22:32:48.344671  686214 main.go:141] libmachine: (newest-cni-507807) Creating domain...
	I0130 22:32:49.590569  686214 main.go:141] libmachine: (newest-cni-507807) Waiting to get IP...
	I0130 22:32:49.591323  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:49.591918  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:49.591994  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:49.591892  686237 retry.go:31] will retry after 229.507483ms: waiting for machine to come up
	I0130 22:32:49.823510  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:49.824065  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:49.824098  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:49.824001  686237 retry.go:31] will retry after 334.851564ms: waiting for machine to come up
	I0130 22:32:50.160597  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:50.161061  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:50.161098  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:50.161009  686237 retry.go:31] will retry after 436.519923ms: waiting for machine to come up
	I0130 22:32:50.599599  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:50.600200  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:50.600239  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:50.600111  686237 retry.go:31] will retry after 381.704989ms: waiting for machine to come up
	I0130 22:32:50.983895  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:50.984572  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:50.984608  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:50.984495  686237 retry.go:31] will retry after 501.7142ms: waiting for machine to come up
	I0130 22:32:51.488171  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:51.488619  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:51.488646  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:51.488586  686237 retry.go:31] will retry after 703.569138ms: waiting for machine to come up
	I0130 22:32:52.193577  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:52.194510  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:52.194534  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:52.194453  686237 retry.go:31] will retry after 885.583889ms: waiting for machine to come up
	I0130 22:32:53.082178  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:53.082636  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:53.082668  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:53.082582  686237 retry.go:31] will retry after 1.389780595s: waiting for machine to come up
	I0130 22:32:54.474383  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:54.474903  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:54.474939  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:54.474839  686237 retry.go:31] will retry after 1.584665962s: waiting for machine to come up
	I0130 22:32:56.061266  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:56.061758  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:56.061783  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:56.061710  686237 retry.go:31] will retry after 2.068215782s: waiting for machine to come up
	I0130 22:32:58.132113  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:32:58.132611  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:32:58.132636  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:32:58.132548  686237 retry.go:31] will retry after 2.48238431s: waiting for machine to come up
	I0130 22:33:00.618332  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:00.618753  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:33:00.618782  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:33:00.618701  686237 retry.go:31] will retry after 2.512763919s: waiting for machine to come up
	I0130 22:33:03.133026  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:03.133425  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:33:03.133454  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:33:03.133357  686237 retry.go:31] will retry after 4.117036665s: waiting for machine to come up
	I0130 22:33:07.254595  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:07.255049  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:33:07.255077  686214 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:33:07.254995  686237 retry.go:31] will retry after 3.671927151s: waiting for machine to come up
	I0130 22:33:10.928658  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:10.929178  686214 main.go:141] libmachine: (newest-cni-507807) Found IP for machine: 192.168.39.100
	I0130 22:33:10.929211  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has current primary IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:10.929222  686214 main.go:141] libmachine: (newest-cni-507807) Reserving static IP address...
	I0130 22:33:10.929721  686214 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find host DHCP lease matching {name: "newest-cni-507807", mac: "52:54:00:65:8c:48", ip: "192.168.39.100"} in network mk-newest-cni-507807
	I0130 22:33:11.007452  686214 main.go:141] libmachine: (newest-cni-507807) Reserved static IP address: 192.168.39.100
	I0130 22:33:11.007486  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Getting to WaitForSSH function...
	I0130 22:33:11.007497  686214 main.go:141] libmachine: (newest-cni-507807) Waiting for SSH to be available...
	I0130 22:33:11.010786  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.011349  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.011379  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.011593  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Using SSH client type: external
	I0130 22:33:11.011615  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa (-rw-------)
	I0130 22:33:11.011654  686214 main.go:141] libmachine: (newest-cni-507807) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:33:11.011673  686214 main.go:141] libmachine: (newest-cni-507807) DBG | About to run SSH command:
	I0130 22:33:11.011688  686214 main.go:141] libmachine: (newest-cni-507807) DBG | exit 0
	I0130 22:33:11.117819  686214 main.go:141] libmachine: (newest-cni-507807) DBG | SSH cmd err, output: <nil>: 
	I0130 22:33:11.118090  686214 main.go:141] libmachine: (newest-cni-507807) KVM machine creation complete!
	I0130 22:33:11.118577  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetConfigRaw
	I0130 22:33:11.119308  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:11.119565  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:11.119826  686214 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 22:33:11.119849  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetState
	I0130 22:33:11.121481  686214 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 22:33:11.121532  686214 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 22:33:11.121546  686214 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 22:33:11.121561  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:11.124188  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.124604  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.124634  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.124808  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:11.125022  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.125206  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.125335  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:11.125515  686214 main.go:141] libmachine: Using SSH client type: native
	I0130 22:33:11.126062  686214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0130 22:33:11.126083  686214 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 22:33:11.260836  686214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:33:11.260862  686214 main.go:141] libmachine: Detecting the provisioner...
	I0130 22:33:11.260871  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:11.263664  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.263985  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.264012  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.264168  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:11.264637  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.264856  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.265034  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:11.265221  686214 main.go:141] libmachine: Using SSH client type: native
	I0130 22:33:11.265657  686214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0130 22:33:11.265673  686214 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 22:33:11.394559  686214 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 22:33:11.394624  686214 main.go:141] libmachine: found compatible host: buildroot
	I0130 22:33:11.394638  686214 main.go:141] libmachine: Provisioning with buildroot...
	I0130 22:33:11.394653  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetMachineName
	I0130 22:33:11.394916  686214 buildroot.go:166] provisioning hostname "newest-cni-507807"
	I0130 22:33:11.394944  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetMachineName
	I0130 22:33:11.395150  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:11.398276  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.398650  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.398686  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.398817  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:11.399002  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.399167  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.399332  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:11.399526  686214 main.go:141] libmachine: Using SSH client type: native
	I0130 22:33:11.399830  686214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0130 22:33:11.399843  686214 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-507807 && echo "newest-cni-507807" | sudo tee /etc/hostname
	I0130 22:33:11.546368  686214 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-507807
	
	I0130 22:33:11.546396  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:11.549440  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.549864  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.549893  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.550091  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:11.550303  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.550496  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.550709  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:11.550894  686214 main.go:141] libmachine: Using SSH client type: native
	I0130 22:33:11.551212  686214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0130 22:33:11.551234  686214 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-507807' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-507807/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-507807' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:33:11.690349  686214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
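The two SSH commands above first persist the hostname and then patch /etc/hosts so the new name resolves locally. A simplified standalone sketch of the same sequence (NEW_HOSTNAME is a placeholder, not a value taken from this run):

    NEW_HOSTNAME=newest-cni-507807            # placeholder; any machine name
    # persist the hostname, as in the logged tee command
    sudo hostname "$NEW_HOSTNAME" && echo "$NEW_HOSTNAME" | sudo tee /etc/hostname
    # make the name resolve via the 127.0.1.1 convention used above
    if grep -q "^127.0.1.1" /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 $NEW_HOSTNAME/" /etc/hosts
    else
        echo "127.0.1.1 $NEW_HOSTNAME" | sudo tee -a /etc/hosts
    fi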
	I0130 22:33:11.690395  686214 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:33:11.690439  686214 buildroot.go:174] setting up certificates
	I0130 22:33:11.690450  686214 provision.go:83] configureAuth start
	I0130 22:33:11.690468  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetMachineName
	I0130 22:33:11.690789  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetIP
	I0130 22:33:11.693735  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.694156  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.694188  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.694495  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:11.696790  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.697182  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.697207  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.697441  686214 provision.go:138] copyHostCerts
	I0130 22:33:11.697522  686214 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:33:11.697537  686214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:33:11.697619  686214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:33:11.697758  686214 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:33:11.697771  686214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:33:11.697808  686214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:33:11.697904  686214 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:33:11.697914  686214 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:33:11.697950  686214 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:33:11.698035  686214 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.newest-cni-507807 san=[192.168.39.100 192.168.39.100 localhost 127.0.0.1 minikube newest-cni-507807]
	I0130 22:33:11.971428  686214 provision.go:172] copyRemoteCerts
	I0130 22:33:11.971492  686214 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:33:11.971530  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:11.974473  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.974937  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:11.974980  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:11.975121  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:11.975317  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:11.975432  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:11.975568  686214 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa Username:docker}
	I0130 22:33:12.072148  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:33:12.095560  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0130 22:33:12.118482  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:33:12.140369  686214 provision.go:86] duration metric: configureAuth took 449.906012ms
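copyRemoteCerts above pushes the CA plus the freshly generated server keypair into /etc/docker on the guest over SSH. Done by hand it would look roughly like the sketch below; the key and certificate paths are the ones from this run, while the copy-via-/tmp detour is just one way to get root-owned files into place and is not how the ssh_runner does it:

    M=/home/jenkins/minikube-integration/18014-640473/.minikube
    KEY=$M/machines/newest-cni-507807/id_rsa
    ssh -i "$KEY" docker@192.168.39.100 'sudo mkdir -p /etc/docker'
    scp -i "$KEY" "$M/certs/ca.pem" "$M/machines/server.pem" "$M/machines/server-key.pem" \
        docker@192.168.39.100:/tmp/
    ssh -i "$KEY" docker@192.168.39.100 'sudo mv /tmp/ca.pem /tmp/server.pem /tmp/server-key.pem /etc/docker/'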
	I0130 22:33:12.140402  686214 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:33:12.140604  686214 config.go:182] Loaded profile config "newest-cni-507807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:33:12.140704  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:12.143434  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.143908  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.143933  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.144136  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:12.144355  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.144589  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.144753  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:12.144932  686214 main.go:141] libmachine: Using SSH client type: native
	I0130 22:33:12.145275  686214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0130 22:33:12.145291  686214 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:33:12.506643  686214 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
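The printf verbs in the command above are mangled by the logger (%!s(MISSING)), but the echoed result shows what was written: a sysconfig drop-in carrying the insecure-registry flag for the service CIDR, followed by a CRI-O restart. Reconstructed as a plain shell sketch:

    sudo mkdir -p /etc/sysconfig
    # value taken from the output echoed back above
    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio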
	I0130 22:33:12.506682  686214 main.go:141] libmachine: Checking connection to Docker...
	I0130 22:33:12.506695  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetURL
	I0130 22:33:12.508229  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Using libvirt version 6000000
	I0130 22:33:12.511020  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.511443  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.511465  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.511752  686214 main.go:141] libmachine: Docker is up and running!
	I0130 22:33:12.511766  686214 main.go:141] libmachine: Reticulating splines...
	I0130 22:33:12.511774  686214 client.go:171] LocalClient.Create took 24.578796366s
	I0130 22:33:12.511801  686214 start.go:167] duration metric: libmachine.API.Create for "newest-cni-507807" took 24.578865596s
	I0130 22:33:12.511813  686214 start.go:300] post-start starting for "newest-cni-507807" (driver="kvm2")
	I0130 22:33:12.511829  686214 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:33:12.511852  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:12.512104  686214 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:33:12.512134  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:12.514841  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.515202  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.515231  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.515375  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:12.515563  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.515714  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:12.515848  686214 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa Username:docker}
	I0130 22:33:12.611754  686214 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:33:12.616585  686214 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:33:12.616615  686214 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:33:12.616697  686214 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:33:12.616799  686214 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:33:12.616930  686214 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:33:12.626935  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:33:12.649227  686214 start.go:303] post-start completed in 137.401792ms
	I0130 22:33:12.649275  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetConfigRaw
	I0130 22:33:12.650039  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetIP
	I0130 22:33:12.652887  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.653320  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.653348  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.653686  686214 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/config.json ...
	I0130 22:33:12.653901  686214 start.go:128] duration metric: createHost completed in 24.738680153s
	I0130 22:33:12.653984  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:12.656500  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.656971  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.657030  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.657281  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:12.657451  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.657666  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.657861  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:12.658056  686214 main.go:141] libmachine: Using SSH client type: native
	I0130 22:33:12.658523  686214 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.39.100 22 <nil> <nil>}
	I0130 22:33:12.658541  686214 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:33:12.802472  686214 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706653992.781626167
	
	I0130 22:33:12.802490  686214 fix.go:206] guest clock: 1706653992.781626167
	I0130 22:33:12.802497  686214 fix.go:219] Guest: 2024-01-30 22:33:12.781626167 +0000 UTC Remote: 2024-01-30 22:33:12.653958461 +0000 UTC m=+24.871446162 (delta=127.667706ms)
	I0130 22:33:12.802518  686214 fix.go:190] guest clock delta is within tolerance: 127.667706ms
	I0130 22:33:12.802525  686214 start.go:83] releasing machines lock for "newest-cni-507807", held for 24.887434744s
	I0130 22:33:12.802577  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:12.802823  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetIP
	I0130 22:33:12.805651  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.806056  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.806083  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.806256  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:12.806751  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:12.806935  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:12.807057  686214 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:33:12.807089  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:12.807367  686214 ssh_runner.go:195] Run: cat /version.json
	I0130 22:33:12.807393  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:12.809868  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.810231  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.810514  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.810561  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.810589  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:12.810603  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:12.810746  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:12.810898  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:12.810949  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.811120  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:12.811123  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:12.811307  686214 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa Username:docker}
	I0130 22:33:12.811378  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:12.811553  686214 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa Username:docker}
	I0130 22:33:12.902578  686214 ssh_runner.go:195] Run: systemctl --version
	I0130 22:33:12.935318  686214 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:33:13.109432  686214 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:33:13.117419  686214 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:33:13.117557  686214 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:33:13.134781  686214 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:33:13.134808  686214 start.go:475] detecting cgroup driver to use...
	I0130 22:33:13.134891  686214 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:33:13.151752  686214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:33:13.163361  686214 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:33:13.163434  686214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:33:13.175771  686214 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:33:13.188553  686214 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:33:13.296625  686214 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:33:13.421321  686214 docker.go:233] disabling docker service ...
	I0130 22:33:13.421407  686214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:33:13.434567  686214 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:33:13.446586  686214 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:33:13.560373  686214 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:33:13.675322  686214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
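The two blocks above make sure only CRI-O is left serving the CRI: cri-dockerd is stopped and masked first, then the Docker engine itself. Condensed into one script, the logged sequence is essentially:

    # cri-dockerd: socket first, then the service
    sudo systemctl stop -f cri-docker.socket
    sudo systemctl stop -f cri-docker.service
    sudo systemctl disable cri-docker.socket
    sudo systemctl mask cri-docker.service
    # then the Docker engine itself
    sudo systemctl stop -f docker.socket
    sudo systemctl stop -f docker.service
    sudo systemctl disable docker.socket
    sudo systemctl mask docker.service
    systemctl is-active --quiet docker || echo "docker is inactive"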
	I0130 22:33:13.687998  686214 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:33:13.706479  686214 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:33:13.706553  686214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:33:13.715627  686214 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:33:13.715705  686214 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:33:13.725130  686214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:33:13.734389  686214 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:33:13.743273  686214 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:33:13.752685  686214 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:33:13.760836  686214 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:33:13.760888  686214 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:33:13.773012  686214 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:33:13.781961  686214 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0130 22:33:13.895374  686214 ssh_runner.go:195] Run: sudo systemctl restart crio
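Taken together, the CRI-O tuning above is a handful of in-place sed edits on 02-crio.conf plus the netfilter prerequisites for the pod network, finished off with a restart. A condensed sketch of the same edits:

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$CONF"
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # bridged pod traffic must be visible to iptables, and forwarding must be on
    sudo modprobe br_netfilter
    sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
    sudo systemctl daemon-reload && sudo systemctl restart crio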
	I0130 22:33:14.065800  686214 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:33:14.065895  686214 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:33:14.071305  686214 start.go:543] Will wait 60s for crictl version
	I0130 22:33:14.071382  686214 ssh_runner.go:195] Run: which crictl
	I0130 22:33:14.076129  686214 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:33:14.119732  686214 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:33:14.119818  686214 ssh_runner.go:195] Run: crio --version
	I0130 22:33:14.168781  686214 ssh_runner.go:195] Run: crio --version
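Before moving on, the start-up code waits up to 60s for the CRI-O socket and then asks both crictl and crio for their versions (CRI version 0.1.0, cri-o 1.24.1 above). The equivalent manual check:

    # wait (up to 60s) for the CRI-O socket, then query versions
    timeout 60 sh -c 'until [ -S /var/run/crio/crio.sock ]; do sleep 1; done'
    sudo crictl version
    crio --version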
	I0130 22:33:14.223686  686214 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.24.1 ...
	I0130 22:33:14.224975  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetIP
	I0130 22:33:14.227986  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:14.228373  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:14.228405  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:14.228696  686214 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0130 22:33:14.232561  686214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:33:14.245749  686214 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0130 22:33:14.247022  686214 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:33:14.247097  686214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:33:14.283054  686214 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0130 22:33:14.283126  686214 ssh_runner.go:195] Run: which lz4
	I0130 22:33:14.286961  686214 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:33:14.291121  686214 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:33:14.291151  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (401853962 bytes)
	I0130 22:33:15.941810  686214 crio.go:444] Took 1.654876 seconds to copy over tarball
	I0130 22:33:15.941890  686214 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:33:18.836549  686214 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.894624152s)
	I0130 22:33:18.836580  686214 crio.go:451] Took 2.894743 seconds to extract the tarball
	I0130 22:33:18.836593  686214 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:33:18.876055  686214 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:33:18.955635  686214 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:33:18.955661  686214 cache_images.go:84] Images are preloaded, skipping loading
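The preload path above copies a ~400 MB lz4 tarball of container images onto the guest and unpacks it straight into /var, after which crictl reports every required image as already present. The unpack step, with the same flags as the logged command:

    # extract the preloaded image tarball into /var, preserving file capabilities
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4
    # confirm the images landed in the CRI-O store
    sudo crictl images --output json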
	I0130 22:33:18.955728  686214 ssh_runner.go:195] Run: crio config
	I0130 22:33:19.023949  686214 cni.go:84] Creating CNI manager for ""
	I0130 22:33:19.023974  686214 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:33:19.023998  686214 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0130 22:33:19.024018  686214 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.39.100 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-507807 NodeName:newest-cni-507807 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureAr
gs:map[] NodeIP:192.168.39.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:33:19.024221  686214 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-507807"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
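The generated config above stitches four documents into one file: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. To sanity-check such a file by hand before letting the bootstrap run, one option is a dry-run against the path the config is later copied to in this log (a sketch, not something minikube itself does here):

    # exercise the config without changing the node
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run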
	
	I0130 22:33:19.024324  686214 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-507807 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-507807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:33:19.024400  686214 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0130 22:33:19.035812  686214 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:33:19.035896  686214 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:33:19.046851  686214 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (419 bytes)
	I0130 22:33:19.063738  686214 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0130 22:33:19.081624  686214 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
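The three scp-from-memory calls above materialize the kubelet drop-in (10-kubeadm.conf), the kubelet unit file and kubeadm.yaml.new on the guest. Once they are in place, the usual follow-up on the node is a daemon-reload plus a look at the merged unit; a small sketch:

    sudo systemctl daemon-reload
    # show kubelet.service together with the 10-kubeadm.conf drop-in written above
    systemctl cat kubelet.service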
	I0130 22:33:19.098092  686214 ssh_runner.go:195] Run: grep 192.168.39.100	control-plane.minikube.internal$ /etc/hosts
	I0130 22:33:19.101912  686214 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:33:19.113561  686214 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807 for IP: 192.168.39.100
	I0130 22:33:19.113625  686214 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:19.113806  686214 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:33:19.113874  686214 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:33:19.113987  686214 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/client.key
	I0130 22:33:19.114005  686214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/client.crt with IP's: []
	I0130 22:33:19.522662  686214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/client.crt ...
	I0130 22:33:19.522700  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/client.crt: {Name:mk9b47df82ee5904cdbe45226e3dc80641f78e24 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:19.561071  686214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/client.key ...
	I0130 22:33:19.561107  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/client.key: {Name:mk093f083d610c6e7aaeca3c60c63e85873dfcec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:19.561247  686214 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.key.3c12ef50
	I0130 22:33:19.561266  686214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.crt.3c12ef50 with IP's: [192.168.39.100 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 22:33:20.059072  686214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.crt.3c12ef50 ...
	I0130 22:33:20.059110  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.crt.3c12ef50: {Name:mk742c901761ca0efec3f1b549806fe3561b3566 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:20.059312  686214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.key.3c12ef50 ...
	I0130 22:33:20.059343  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.key.3c12ef50: {Name:mk272b6c90db4d4d3f2be5c89ec01774b1c03c48 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:20.059458  686214 certs.go:337] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.crt.3c12ef50 -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.crt
	I0130 22:33:20.059598  686214 certs.go:341] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.key.3c12ef50 -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.key
	I0130 22:33:20.059675  686214 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.key
	I0130 22:33:20.059696  686214 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.crt with IP's: []
	I0130 22:33:20.165833  686214 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.crt ...
	I0130 22:33:20.165863  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.crt: {Name:mkb545b1eedd944f89ec5081d0cd49ec2e5c9e6a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:20.207399  686214 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.key ...
	I0130 22:33:20.207434  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.key: {Name:mk683692c6438a5011cfd165af0bdc20b455d613 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:20.207736  686214 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:33:20.207797  686214 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:33:20.207815  686214 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:33:20.207856  686214 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:33:20.207892  686214 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:33:20.207926  686214 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:33:20.207981  686214 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:33:20.208869  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:33:20.235527  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0130 22:33:20.259442  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:33:20.282403  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0130 22:33:20.367118  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:33:20.390712  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:33:20.414582  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:33:20.437788  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:33:20.460676  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:33:20.482698  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:33:20.506791  686214 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:33:20.530522  686214 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
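At this point all profile and CA material sits under /var/lib/minikube/certs (with shared copies under /usr/share/ca-certificates) on the guest. A quick way to confirm the apiserver certificate really carries the SANs generated earlier (the node IP, the 10.96.0.1 service VIP and the loopbacks):

    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'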
	I0130 22:33:20.547595  686214 ssh_runner.go:195] Run: openssl version
	I0130 22:33:20.553379  686214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:33:20.564412  686214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:33:20.568962  686214 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:33:20.569017  686214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:33:20.576022  686214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:33:20.587405  686214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:33:20.598536  686214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:33:20.603554  686214 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:33:20.603608  686214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:33:20.609505  686214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:33:20.619530  686214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:33:20.631184  686214 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:33:20.635696  686214 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:33:20.635748  686214 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:33:20.641205  686214 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
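The test/ln pairs above implement OpenSSL's hashed-directory lookup: each CA under /usr/share/ca-certificates gets a symlink named <subject-hash>.0 in /etc/ssl/certs, where the hash comes from openssl x509 -hash. Reproducing it for a single certificate:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as seen above
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"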
	I0130 22:33:20.651830  686214 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:33:20.655906  686214 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 22:33:20.655999  686214 kubeadm.go:404] StartCluster: {Name:newest-cni-507807 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.29.0-rc.2 ClusterName:newest-cni-507807 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jen
kins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:33:20.656087  686214 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:33:20.656142  686214 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:33:20.700664  686214 cri.go:89] found id: ""
	I0130 22:33:20.700751  686214 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:33:20.710819  686214 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:33:20.721544  686214 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:33:20.731720  686214 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:33:20.731783  686214 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:33:20.856383  686214 kubeadm.go:322] [init] Using Kubernetes version: v1.29.0-rc.2
	I0130 22:33:20.856453  686214 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:33:21.115826  686214 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:33:21.115983  686214 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:33:21.116112  686214 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:33:21.356223  686214 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:33:21.362945  686214 out.go:204]   - Generating certificates and keys ...
	I0130 22:33:21.363105  686214 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:33:21.363197  686214 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:33:21.689541  686214 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 22:33:21.826440  686214 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 22:33:21.944058  686214 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 22:33:22.089959  686214 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 22:33:22.413756  686214 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 22:33:22.413915  686214 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-507807] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0130 22:33:22.471643  686214 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 22:33:22.471830  686214 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-507807] and IPs [192.168.39.100 127.0.0.1 ::1]
	I0130 22:33:22.704282  686214 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 22:33:22.975340  686214 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 22:33:23.367040  686214 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 22:33:23.367396  686214 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:33:23.438482  686214 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:33:23.542910  686214 kubeadm.go:322] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0130 22:33:23.908803  686214 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:33:24.037398  686214 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:33:24.148686  686214 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:33:24.149874  686214 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:33:24.153406  686214 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:33:24.155198  686214 out.go:204]   - Booting up control plane ...
	I0130 22:33:24.155314  686214 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:33:24.157457  686214 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:33:24.158370  686214 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:33:24.179077  686214 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:33:24.179201  686214 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:33:24.179291  686214 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:33:24.316716  686214 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:33:32.319020  686214 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004118 seconds
	I0130 22:33:32.339538  686214 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0130 22:33:32.361302  686214 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
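The init sequence above was launched by the single kubeadm invocation logged at 22:33:20.731783. A hedged Go sketch of assembling that command, with the binary path, config path and ignored preflight checks copied from the log (SSH transport and PATH handling omitted):

// Sketch only: build the kubeadm init command recorded in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ignored := []string{
		"DirAvailable--etc-kubernetes-manifests",
		"DirAvailable--var-lib-minikube",
		"DirAvailable--var-lib-minikube-etcd",
		"FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml",
		"FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml",
		"FileAvailable--etc-kubernetes-manifests-etcd.yaml",
		"Port-10250", "Swap", "NumCPU", "Mem",
	}
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.29.0-rc.2/kubeadm", "init",
		"--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignored, ","))
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("kubeadm init failed:", err)
	}
}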
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:13:18 UTC, ends at Tue 2024-01-30 22:33:33 UTC. --
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.477354651Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654013477342019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=a4f8e8b9-23cf-44f8-bd5a-a9bb62aa7c81 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.478261589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=949ac688-def0-4e55-875b-970f545dc88b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.478305988Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=949ac688-def0-4e55-875b-970f545dc88b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.478493346Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082,PodSandboxId:2ee105736d6a278e92c2c4780f713ec84115a6ea4c60c359f105c392e1133201,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706653140809459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb2b13-124f-427c-875c-ee1ea1178907,},Annotations:map[string]string{io.kubernetes.container.hash: 3d9b75e5,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46,PodSandboxId:f21c5be3455f4ed541d2e8f375827449e700f733267537b190a82b3bf51b572e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706653140673546043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ee699b-fd5f-4a47-b858-5b202d1e9384,},Annotations:map[string]string{io.kubernetes.container.hash: af9ddc11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,PodSandboxId:7514cc9f6e7b2bb99e41ffa7248e742b13fbc7d2cb069a3767c64e5cfe4967ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139919316385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,PodSandboxId:dba9fe5afbe2d757828a325002aa0151319c9e3ab2e53a976e99414bea9542a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139709859380,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8
cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,PodSandboxId:30b263bb4490b4b0e614559457e4ca2b7f0d9a53a3e840541cc20a58e0d2b39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:170665311713
4066027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,PodSandboxId:55ae03c8d1cc0fc2e69b0ed9c42b7693e6902fc55df959ee9fe00067267a62bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706653117097866601,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,PodSandboxId:1e4e292748c05b12b63873e31ea388eb89c10c9131184daf9eac871b99a155d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706653116709627845,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,PodSandboxId:d0d9d3e1f76a8f570082e38fbde0473a18ea5e0fee70c4d2a482dcbec8cf719b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706653116604364196,Labels:map[string]string{io.k
ubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=949ac688-def0-4e55-875b-970f545dc88b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.530456591Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=967e7617-e80f-4dc2-8513-f10681202b30 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.530549057Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=967e7617-e80f-4dc2-8513-f10681202b30 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.531631044Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=ce0fb74b-4dcd-4931-9c8d-2410284e28a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.532197788Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654013532183028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=ce0fb74b-4dcd-4931-9c8d-2410284e28a8 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.533117437Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ddeef382-f31b-4a79-99e7-137178fec9f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.533196544Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ddeef382-f31b-4a79-99e7-137178fec9f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.533423791Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082,PodSandboxId:2ee105736d6a278e92c2c4780f713ec84115a6ea4c60c359f105c392e1133201,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706653140809459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb2b13-124f-427c-875c-ee1ea1178907,},Annotations:map[string]string{io.kubernetes.container.hash: 3d9b75e5,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46,PodSandboxId:f21c5be3455f4ed541d2e8f375827449e700f733267537b190a82b3bf51b572e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706653140673546043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ee699b-fd5f-4a47-b858-5b202d1e9384,},Annotations:map[string]string{io.kubernetes.container.hash: af9ddc11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,PodSandboxId:7514cc9f6e7b2bb99e41ffa7248e742b13fbc7d2cb069a3767c64e5cfe4967ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139919316385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,PodSandboxId:dba9fe5afbe2d757828a325002aa0151319c9e3ab2e53a976e99414bea9542a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139709859380,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8
cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,PodSandboxId:30b263bb4490b4b0e614559457e4ca2b7f0d9a53a3e840541cc20a58e0d2b39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:170665311713
4066027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,PodSandboxId:55ae03c8d1cc0fc2e69b0ed9c42b7693e6902fc55df959ee9fe00067267a62bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706653117097866601,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,PodSandboxId:1e4e292748c05b12b63873e31ea388eb89c10c9131184daf9eac871b99a155d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706653116709627845,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,PodSandboxId:d0d9d3e1f76a8f570082e38fbde0473a18ea5e0fee70c4d2a482dcbec8cf719b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706653116604364196,Labels:map[string]string{io.k
ubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ddeef382-f31b-4a79-99e7-137178fec9f9 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.576004643Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=aa200385-ad7f-4662-90fc-7d2ffb5aea0d name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.576157028Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=aa200385-ad7f-4662-90fc-7d2ffb5aea0d name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.577473278Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=f41dcee4-f5cf-4a27-9a6c-08b9194d0b64 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.578581786Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654013578560223,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=f41dcee4-f5cf-4a27-9a6c-08b9194d0b64 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.579512391Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3035fa35-1e34-43fd-b1b5-8a103eba20c1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.579593177Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3035fa35-1e34-43fd-b1b5-8a103eba20c1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.579957038Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082,PodSandboxId:2ee105736d6a278e92c2c4780f713ec84115a6ea4c60c359f105c392e1133201,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706653140809459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb2b13-124f-427c-875c-ee1ea1178907,},Annotations:map[string]string{io.kubernetes.container.hash: 3d9b75e5,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46,PodSandboxId:f21c5be3455f4ed541d2e8f375827449e700f733267537b190a82b3bf51b572e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706653140673546043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ee699b-fd5f-4a47-b858-5b202d1e9384,},Annotations:map[string]string{io.kubernetes.container.hash: af9ddc11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,PodSandboxId:7514cc9f6e7b2bb99e41ffa7248e742b13fbc7d2cb069a3767c64e5cfe4967ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139919316385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,PodSandboxId:dba9fe5afbe2d757828a325002aa0151319c9e3ab2e53a976e99414bea9542a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139709859380,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8
cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,PodSandboxId:30b263bb4490b4b0e614559457e4ca2b7f0d9a53a3e840541cc20a58e0d2b39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:170665311713
4066027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,PodSandboxId:55ae03c8d1cc0fc2e69b0ed9c42b7693e6902fc55df959ee9fe00067267a62bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706653117097866601,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,PodSandboxId:1e4e292748c05b12b63873e31ea388eb89c10c9131184daf9eac871b99a155d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706653116709627845,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,PodSandboxId:d0d9d3e1f76a8f570082e38fbde0473a18ea5e0fee70c4d2a482dcbec8cf719b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706653116604364196,Labels:map[string]string{io.k
ubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3035fa35-1e34-43fd-b1b5-8a103eba20c1 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.620871825Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=cbbb73dc-566a-4e92-94f0-e72b1f43f11d name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.620990109Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=cbbb73dc-566a-4e92-94f0-e72b1f43f11d name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.622366293Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fd32c976-6007-48d3-806d-38884d3e141b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.622738695Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654013622724243,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97830,},InodesUsed:&UInt64Value{Value:49,},},},}" file="go-grpc-middleware/chain.go:25" id=fd32c976-6007-48d3-806d-38884d3e141b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.623862907Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2b6e9aa4-0a14-44bf-98b2-d792aa62aff3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.623937196Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2b6e9aa4-0a14-44bf-98b2-d792aa62aff3 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:33 no-preload-023824 crio[710]: time="2024-01-30 22:33:33.624128752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082,PodSandboxId:2ee105736d6a278e92c2c4780f713ec84115a6ea4c60c359f105c392e1133201,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1706653140809459135,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e9fb2b13-124f-427c-875c-ee1ea1178907,},Annotations:map[string]string{io.kubernetes.container.hash: 3d9b75e5,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46,PodSandboxId:f21c5be3455f4ed541d2e8f375827449e700f733267537b190a82b3bf51b572e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:eba727be6ca4938cb3deec2eb6b7767e33995b2083144b23a8bbd138515b0dac,State:CONTAINER_RUNNING,CreatedAt:1706653140673546043,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8rn6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97ee699b-fd5f-4a47-b858-5b202d1e9384,},Annotations:map[string]string{io.kubernetes.container.hash: af9ddc11,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessage
Path: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b,PodSandboxId:7514cc9f6e7b2bb99e41ffa7248e742b13fbc7d2cb069a3767c64e5cfe4967ad,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139919316385,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-znj8f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 985cd51e-1832-487e-af5b-6a29108fc494,},Annotations:map[string]string{io.kubernetes.container.hash: a5b39eb2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-t
cp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702,PodSandboxId:dba9fe5afbe2d757828a325002aa0151319c9e3ab2e53a976e99414bea9542a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5a532505a3ed89827ff6d357b23a8eb2e7b6ad25e4cfd2d46bcedbf22b812e58,State:CONTAINER_RUNNING,CreatedAt:1706653139709859380,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-rktrb,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5470bf8-982d-4707-8
cd8-c0c0228219fa,},Annotations:map[string]string{io.kubernetes.container.hash: b995891f,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a,PodSandboxId:30b263bb4490b4b0e614559457e4ca2b7f0d9a53a3e840541cc20a58e0d2b39c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:a9aa934e24e72ecc1829948966f500cf16b2faa6b63de15f1b6de03cc074812f,State:CONTAINER_RUNNING,CreatedAt:170665311713
4066027,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 604ca0fe424ef8aca193b8f29827fac1,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9,PodSandboxId:55ae03c8d1cc0fc2e69b0ed9c42b7693e6902fc55df959ee9fe00067267a62bc,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:061a72677dc6dc85cdb47cf61f4453c9be173c19c3d48e04b1d7a25f9b405fe7,State:CONTAINER_RUNNING,CreatedAt:1706653117097866601,Labels:map[string]strin
g{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7bc1fdf64040bcde5c69fa9202b40e1a,},Annotations:map[string]string{io.kubernetes.container.hash: f02f90d4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8,PodSandboxId:1e4e292748c05b12b63873e31ea388eb89c10c9131184daf9eac871b99a155d8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:a5352bf791d1f99a5175e68039fe9bf100ca3dfd2400901a3a5e9bf0c8b03203,State:CONTAINER_RUNNING,CreatedAt:1706653116709627845,Labels:map[string]string{io.kubernet
es.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0e9683a8229c0ddfd9d2b4f98700fe81,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3,PodSandboxId:d0d9d3e1f76a8f570082e38fbde0473a18ea5e0fee70c4d2a482dcbec8cf719b,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:1d6d10016794014c57a966c3c40b351e068202efa97e3fec2d3387198e0cbad0,State:CONTAINER_RUNNING,CreatedAt:1706653116604364196,Labels:map[string]string{io.k
ubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-023824,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 05fe72f0b32ea68e0f89c1642a7c70f5,},Annotations:map[string]string{io.kubernetes.container.hash: d218a7d8,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2b6e9aa4-0a14-44bf-98b2-d792aa62aff3 name=/runtime.v1.RuntimeService/ListContainers
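The journal excerpt above is routine CRI polling: Version, ImageFsInfo and ListContainers requests answered by CRI-O over its gRPC socket. A minimal sketch of issuing the same Version and ListContainers calls against the crio socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules are available (illustrative only, not minikube's or the kubelet's own code):

// Hedged sketch: query CRI-O over its CRI gRPC socket, mirroring the
// Version and ListContainers requests seen in the journal above.
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Socket path taken from the node's cri-socket annotation in this report.
	conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Println("runtime:", ver.RuntimeName, ver.RuntimeVersion)

	// Same label filter the bootstrapper used earlier in this log.
	resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
		Filter: &runtimeapi.ContainerFilter{
			LabelSelector: map[string]string{"io.kubernetes.pod.namespace": "kube-system"},
		},
	})
	if err != nil {
		panic(err)
	}
	for _, c := range resp.Containers {
		fmt.Println(c.Metadata.Name, c.State)
	}
}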
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	e38cae605fb7b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   2ee105736d6a2       storage-provisioner
	a3c418d415d66       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   14 minutes ago      Running             kube-proxy                0                   f21c5be3455f4       kube-proxy-8rn6v
	9966c08a886d0       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   7514cc9f6e7b2       coredns-76f75df574-znj8f
	5a605eb28b73f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   14 minutes ago      Running             coredns                   0                   dba9fe5afbe2d       coredns-76f75df574-rktrb
	725f7cb519d6c       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   14 minutes ago      Running             kube-scheduler            2                   30b263bb4490b       kube-scheduler-no-preload-023824
	0319bb836f3b9       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   14 minutes ago      Running             etcd                      2                   55ae03c8d1cc0       etcd-no-preload-023824
	9c7ed3f938b75       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   14 minutes ago      Running             kube-controller-manager   2                   1e4e292748c05       kube-controller-manager-no-preload-023824
	fc1282976c3bf       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   14 minutes ago      Running             kube-apiserver            2                   d0d9d3e1f76a8       kube-apiserver-no-preload-023824
	
	
	==> coredns [5a605eb28b73fe5459360e776ec47d79777ee35da42620204a1797ae5388a702] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:44861 - 51576 "HINFO IN 6624304217867511352.7498625084823510745. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.034469514s
	
	
	==> coredns [9966c08a886d0d3a1126a5f2887e82e7d3d6df6e52c7420df33c501f35601d6b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57767 - 45205 "HINFO IN 3851438653245790492.3560899168353838747. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024764723s
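Both CoreDNS replicas report only a configuration reload and the HINFO self-check, so in-cluster DNS appears healthy. A small sketch of exercising the same resolver path with the standard library; the ClusterIP 10.96.0.10 is an assumption (the usual kube-dns address within the 10.96.0.0/12 service CIDR) and does not appear in this report:

// Hedged sketch: resolve a service name through the assumed cluster DNS ClusterIP.
package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 2 * time.Second}
			// 10.96.0.10 is an assumed address, not taken from this report.
			return d.DialContext(ctx, network, "10.96.0.10:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}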
	
	
	==> describe nodes <==
	Name:               no-preload-023824
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-023824
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=no-preload-023824
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_18_44_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:18:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-023824
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 22:33:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:29:18 +0000   Tue, 30 Jan 2024 22:18:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:29:18 +0000   Tue, 30 Jan 2024 22:18:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:29:18 +0000   Tue, 30 Jan 2024 22:18:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:29:18 +0000   Tue, 30 Jan 2024 22:18:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.232
	  Hostname:    no-preload-023824
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 2211a4f4aa3d427eb950d566eb36f14d
	  System UUID:                2211a4f4-aa3d-427e-b950-d566eb36f14d
	  Boot ID:                    fd69ba0b-2106-47cf-bc46-c0af7535ee48
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-rktrb                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-76f75df574-znj8f                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-no-preload-023824                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-no-preload-023824             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-no-preload-023824    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8rn6v                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-no-preload-023824             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-57f55c9bc5-nvplb              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node no-preload-023824 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node no-preload-023824 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node no-preload-023824 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             14m   kubelet          Node no-preload-023824 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node no-preload-023824 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node no-preload-023824 event: Registered Node no-preload-023824 in Controller
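The node description above summarises conditions, allocatable resources and events for no-preload-023824. A hedged client-go sketch that fetches the same Ready condition and allocatable figures; the kubeconfig location is read from the KUBECONFIG environment variable, which is an assumption of this sketch:

// Hedged sketch: fetch node conditions and allocatable resources with client-go,
// roughly the data that "describe nodes" summarises above.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes KUBECONFIG points at this cluster's kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-023824", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
}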
	
	
	==> dmesg <==
	[Jan30 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.067703] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.333805] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.359432] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.147710] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.350380] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.406055] systemd-fstab-generator[636]: Ignoring "noauto" for root device
	[  +0.113337] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.163267] systemd-fstab-generator[660]: Ignoring "noauto" for root device
	[  +0.117992] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.207025] systemd-fstab-generator[695]: Ignoring "noauto" for root device
	[ +28.802704] systemd-fstab-generator[1325]: Ignoring "noauto" for root device
	[Jan30 22:14] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 22:18] systemd-fstab-generator[3900]: Ignoring "noauto" for root device
	[  +9.298514] systemd-fstab-generator[4232]: Ignoring "noauto" for root device
	[ +13.237525] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [0319bb836f3b931e6b82f7767e6ab061062f691b50320656cdc5eb12734faeb9] <==
	{"level":"info","ts":"2024-01-30T22:18:38.913968Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.232:2380"}
	{"level":"info","ts":"2024-01-30T22:18:38.915707Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T22:18:38.915638Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f98cde10e2754c8e","initial-advertise-peer-urls":["https://192.168.61.232:2380"],"listen-peer-urls":["https://192.168.61.232:2380"],"advertise-client-urls":["https://192.168.61.232:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.232:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-01-30T22:18:39.080472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:39.080663Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:39.080731Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e received MsgPreVoteResp from f98cde10e2754c8e at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:39.080863Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e became candidate at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.080933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e received MsgVoteResp from f98cde10e2754c8e at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.080975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f98cde10e2754c8e became leader at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.081049Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f98cde10e2754c8e elected leader f98cde10e2754c8e at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:39.083465Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.085095Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f98cde10e2754c8e","local-member-attributes":"{Name:no-preload-023824 ClientURLs:[https://192.168.61.232:2379]}","request-path":"/0/members/f98cde10e2754c8e/attributes","cluster-id":"b57bc7a6641489a","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T22:18:39.08532Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:39.085876Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b57bc7a6641489a","local-member-id":"f98cde10e2754c8e","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.085983Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.086013Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:39.087006Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:39.087875Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:39.088694Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:39.089649Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T22:18:39.093237Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.232:2379"}
	{"level":"info","ts":"2024-01-30T22:28:39.549383Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":676}
	{"level":"info","ts":"2024-01-30T22:28:39.551765Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":676,"took":"1.874323ms","hash":1124691146}
	{"level":"info","ts":"2024-01-30T22:28:39.551921Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1124691146,"revision":676,"compact-revision":-1}
	{"level":"info","ts":"2024-01-30T22:33:20.345756Z","caller":"traceutil/trace.go:171","msg":"trace[725697824] transaction","detail":"{read_only:false; response_revision:1147; number_of_response:1; }","duration":"272.864712ms","start":"2024-01-30T22:33:20.072838Z","end":"2024-01-30T22:33:20.345703Z","steps":["trace[725697824] 'process raft request'  (duration: 272.439364ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:33:34 up 20 min,  0 users,  load average: 0.16, 0.31, 0.26
	Linux no-preload-023824 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [fc1282976c3bfcc54c24363a48388973b2d82a5f7e28ec6f889787f9da518fa3] <==
	I0130 22:26:42.026647       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:28:41.027033       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:28:41.027175       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0130 22:28:42.027712       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:28:42.027935       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:28:42.027971       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:28:42.027732       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:28:42.028082       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:28:42.029369       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:29:42.028571       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:29:42.028650       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:29:42.028666       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:29:42.029867       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:29:42.030007       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:29:42.030052       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:31:42.029556       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:31:42.030024       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:31:42.030059       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:31:42.030138       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:31:42.030210       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:31:42.032264       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9c7ed3f938b7556c1658c6618399861f0fa953c8f60455062ad07e81c2891ea8] <==
	I0130 22:27:56.793055       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:26.317407       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:26.802168       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:56.324083       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:56.810872       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:26.330696       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:26.819629       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:56.335933       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:56.829277       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:30:00.733678       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="269.786µs"
	I0130 22:30:12.726510       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="100.842µs"
	E0130 22:30:26.342284       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:26.838138       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:56.348513       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:56.847869       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:26.354459       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:26.856567       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:56.359986       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:56.865387       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:26.365263       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:26.876180       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:56.373299       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:56.887043       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:33:26.381662       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:33:26.898461       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [a3c418d415d663a22f21517d30f28ba35a1b25db56ada925d45058d0af5dcc46] <==
	I0130 22:19:01.035924       1 server_others.go:72] "Using iptables proxy"
	I0130 22:19:01.061685       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.61.232"]
	I0130 22:19:01.181469       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0130 22:19:01.181538       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 22:19:01.181557       1 server_others.go:168] "Using iptables Proxier"
	I0130 22:19:01.191214       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 22:19:01.191576       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0130 22:19:01.191630       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 22:19:01.194144       1 config.go:188] "Starting service config controller"
	I0130 22:19:01.194206       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 22:19:01.194230       1 config.go:97] "Starting endpoint slice config controller"
	I0130 22:19:01.194265       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 22:19:01.196301       1 config.go:315] "Starting node config controller"
	I0130 22:19:01.196347       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 22:19:01.295360       1 shared_informer.go:318] Caches are synced for service config
	I0130 22:19:01.295407       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 22:19:01.297050       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [725f7cb519d6cd69e8921c53b5286f19932572ad824edaa067ac07bbc244546a] <==
	W0130 22:18:41.050062       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 22:18:41.050103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 22:18:41.863064       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:18:41.863161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 22:18:42.028971       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 22:18:42.029087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0130 22:18:42.130463       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 22:18:42.130524       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0130 22:18:42.133313       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:42.133366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:42.155116       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:18:42.155194       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 22:18:42.159048       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:42.159108       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:42.179095       1 reflector.go:539] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:18:42.179147       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:18:42.183608       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:18:42.183680       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 22:18:42.192991       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 22:18:42.193037       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 22:18:42.207361       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:18:42.207481       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 22:18:42.283463       1 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:18:42.283630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0130 22:18:45.142386       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:13:18 UTC, ends at Tue 2024-01-30 22:33:34 UTC. --
	Jan 30 22:30:44 no-preload-023824 kubelet[4239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:30:44 no-preload-023824 kubelet[4239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:30:44 no-preload-023824 kubelet[4239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:30:51 no-preload-023824 kubelet[4239]: E0130 22:30:51.706997    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:31:02 no-preload-023824 kubelet[4239]: E0130 22:31:02.707272    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:31:13 no-preload-023824 kubelet[4239]: E0130 22:31:13.707621    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:31:24 no-preload-023824 kubelet[4239]: E0130 22:31:24.707847    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:31:39 no-preload-023824 kubelet[4239]: E0130 22:31:39.708006    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:31:44 no-preload-023824 kubelet[4239]: E0130 22:31:44.738108    4239 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:31:44 no-preload-023824 kubelet[4239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:31:44 no-preload-023824 kubelet[4239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:31:44 no-preload-023824 kubelet[4239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:31:53 no-preload-023824 kubelet[4239]: E0130 22:31:53.707998    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:32:05 no-preload-023824 kubelet[4239]: E0130 22:32:05.706714    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:32:19 no-preload-023824 kubelet[4239]: E0130 22:32:19.708352    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:32:31 no-preload-023824 kubelet[4239]: E0130 22:32:31.707480    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:32:44 no-preload-023824 kubelet[4239]: E0130 22:32:44.708076    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:32:44 no-preload-023824 kubelet[4239]: E0130 22:32:44.737153    4239 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:32:44 no-preload-023824 kubelet[4239]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:32:44 no-preload-023824 kubelet[4239]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:32:44 no-preload-023824 kubelet[4239]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:32:56 no-preload-023824 kubelet[4239]: E0130 22:32:56.708583    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:33:10 no-preload-023824 kubelet[4239]: E0130 22:33:10.707394    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:33:21 no-preload-023824 kubelet[4239]: E0130 22:33:21.707437    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	Jan 30 22:33:32 no-preload-023824 kubelet[4239]: E0130 22:33:32.710299    4239 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nvplb" podUID="04303a01-14e7-441d-876c-25425491cae6"
	
	
	==> storage-provisioner [e38cae605fb7b5cc392a5efb1ba1f8fed50ccfd70d0707af3bd594f27f7a9082] <==
	I0130 22:19:01.174072       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:19:01.190337       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:19:01.202361       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:19:01.227441       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:19:01.227627       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-023824_ef84bc8c-fcf0-4fc6-9afe-7fcb6c65e027!
	I0130 22:19:01.228606       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a498934e-fe5b-481a-835e-acf300322c01", APIVersion:"v1", ResourceVersion:"420", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-023824_ef84bc8c-fcf0-4fc6-9afe-7fcb6c65e027 became leader
	I0130 22:19:01.328396       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-023824_ef84bc8c-fcf0-4fc6-9afe-7fcb6c65e027!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-023824 -n no-preload-023824
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-023824 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nvplb
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb: exit status 1 (70.758304ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nvplb" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-023824 describe pod metrics-server-57f55c9bc5-nvplb: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (327.62s)
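For reference, the post-mortem step above boils down to `kubectl --context no-preload-023824 get po -A --field-selector=status.phase!=Running`. The following is a minimal client-go sketch of the same query, for readers reproducing the check outside the test harness; it is illustrative only (not the minikube helper), and the kubeconfig lookup via clientcmd.RecommendedHomeFile is an assumption for the example, whereas the real run selects the cluster with an explicit --context flag.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (path is an assumption for this example).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Same query as the post-mortem helper: pods in any namespace whose
	// phase is not Running.
	pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

As the log shows, by the time the describe step ran the metrics-server pod had already been removed, which is why the kubectl describe above exits with NotFound.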

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (68.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713938 -n embed-certs-713938
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:33:50.505944073 +0000 UTC m=+5601.906886189
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-713938 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-713938 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.566µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-713938 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
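The wait that timed out above polls until a pod labelled k8s-app=kubernetes-dashboard reports Running, with a 9m0s budget. The following is a minimal sketch of that kind of wait using client-go, illustrative only and not the helper the test uses; waitForLabelledPod, the 5-second poll interval, and the default-kubeconfig loading are assumptions for the example, while the namespace, label selector, and 9-minute timeout are taken from the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForLabelledPod polls the namespace until at least one pod matching the
// label selector is Running, or the context deadline is exceeded.
func waitForLabelledPod(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			// Mirrors the "context deadline exceeded" failure seen above.
			return fmt.Errorf("pod %q did not start: %w", selector, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForLabelledPod(ctx, cs, "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		panic(err)
	}
	fmt.Println("dashboard pod is Running")
}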
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-713938 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-713938 logs -n 25: (1.274630646s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p kubernetes-upgrade-433652                           | kubernetes-upgrade-433652    | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	| delete  | -p                                                     | disable-driver-mounts-818908 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:04 UTC |
	|         | disable-driver-mounts-818908                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:04 UTC | 30 Jan 24 22:06 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-912992        | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC | 30 Jan 24 22:05 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC | 30 Jan 24 22:32 UTC |
	| start   | -p newest-cni-507807 --memory=2200 --alsologtostderr   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC | 30 Jan 24 22:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	| start   | -p auto-381927 --memory=3072                           | auto-381927                  | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-507807             | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-507807                                   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:33:36
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:33:36.118758  686843 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:33:36.119056  686843 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:33:36.119068  686843 out.go:309] Setting ErrFile to fd 2...
	I0130 22:33:36.119073  686843 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:33:36.119255  686843 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:33:36.119900  686843 out.go:303] Setting JSON to false
	I0130 22:33:36.120910  686843 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11768,"bootTime":1706642248,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:33:36.120984  686843 start.go:138] virtualization: kvm guest
	I0130 22:33:36.123243  686843 out.go:177] * [auto-381927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:33:36.124896  686843 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:33:36.126359  686843 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:33:36.124903  686843 notify.go:220] Checking for updates...
	I0130 22:33:36.128099  686843 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:33:36.129561  686843 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:33:36.130820  686843 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:33:36.132138  686843 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:33:36.134135  686843 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:33:36.134252  686843 config.go:182] Loaded profile config "embed-certs-713938": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:33:36.134388  686843 config.go:182] Loaded profile config "newest-cni-507807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:33:36.134482  686843 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:33:36.171608  686843 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 22:33:36.173088  686843 start.go:298] selected driver: kvm2
	I0130 22:33:36.173106  686843 start.go:902] validating driver "kvm2" against <nil>
	I0130 22:33:36.173116  686843 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:33:36.174019  686843 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:33:36.174119  686843 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:33:36.189927  686843 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:33:36.189974  686843 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 22:33:36.190201  686843 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 22:33:36.190280  686843 cni.go:84] Creating CNI manager for ""
	I0130 22:33:36.190297  686843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:33:36.190314  686843 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 22:33:36.190323  686843 start_flags.go:321] config:
	{Name:auto-381927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-381927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket
: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:33:36.190543  686843 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:33:36.192416  686843 out.go:177] * Starting control plane node auto-381927 in cluster auto-381927
	I0130 22:33:36.193638  686843 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:33:36.193697  686843 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 22:33:36.193713  686843 cache.go:56] Caching tarball of preloaded images
	I0130 22:33:36.193800  686843 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:33:36.193812  686843 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 22:33:36.193940  686843 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/config.json ...
	I0130 22:33:36.193963  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/config.json: {Name:mk5b30a87c1c314aa76eb7fbfbc1bd8576f88944 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:36.194126  686843 start.go:365] acquiring machines lock for auto-381927: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:33:36.194169  686843 start.go:369] acquired machines lock for "auto-381927" in 20.71µs
	I0130 22:33:36.194192  686843 start.go:93] Provisioning new machine with config: &{Name:auto-381927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.28.4 ClusterName:auto-381927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:33:36.194288  686843 start.go:125] createHost starting for "" (driver="kvm2")
	I0130 22:33:33.836229  686214 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0130 22:33:33.853168  686214 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0130 22:33:33.880407  686214 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0130 22:33:33.880506  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl label nodes minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5 minikube.k8s.io/name=newest-cni-507807 minikube.k8s.io/updated_at=2024_01_30T22_33_33_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:33.880508  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:34.284898  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:34.335053  686214 ops.go:34] apiserver oom_adj: -16
	I0130 22:33:34.785312  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:35.285764  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:35.785849  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:36.285454  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:36.785274  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:37.284991  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:37.785106  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:36.196064  686843 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0130 22:33:36.196203  686843 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:33:36.196243  686843 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:33:36.210410  686843 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46745
	I0130 22:33:36.210842  686843 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:33:36.211439  686843 main.go:141] libmachine: Using API Version  1
	I0130 22:33:36.211462  686843 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:33:36.211866  686843 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:33:36.212053  686843 main.go:141] libmachine: (auto-381927) Calling .GetMachineName
	I0130 22:33:36.212195  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:33:36.212605  686843 start.go:159] libmachine.API.Create for "auto-381927" (driver="kvm2")
	I0130 22:33:36.212639  686843 client.go:168] LocalClient.Create starting
	I0130 22:33:36.212675  686843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem
	I0130 22:33:36.212732  686843 main.go:141] libmachine: Decoding PEM data...
	I0130 22:33:36.212757  686843 main.go:141] libmachine: Parsing certificate...
	I0130 22:33:36.212849  686843 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem
	I0130 22:33:36.212876  686843 main.go:141] libmachine: Decoding PEM data...
	I0130 22:33:36.212894  686843 main.go:141] libmachine: Parsing certificate...
	I0130 22:33:36.212930  686843 main.go:141] libmachine: Running pre-create checks...
	I0130 22:33:36.212944  686843 main.go:141] libmachine: (auto-381927) Calling .PreCreateCheck
	I0130 22:33:36.214291  686843 main.go:141] libmachine: (auto-381927) Calling .GetConfigRaw
	I0130 22:33:36.214795  686843 main.go:141] libmachine: Creating machine...
	I0130 22:33:36.214814  686843 main.go:141] libmachine: (auto-381927) Calling .Create
	I0130 22:33:36.214954  686843 main.go:141] libmachine: (auto-381927) Creating KVM machine...
	I0130 22:33:36.216213  686843 main.go:141] libmachine: (auto-381927) DBG | found existing default KVM network
	I0130 22:33:36.217637  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.217459  686865 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:b9:e1:1d} reservation:<nil>}
	I0130 22:33:36.218743  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.218641  686865 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:04:77:60} reservation:<nil>}
	I0130 22:33:36.220030  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.219939  686865 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000281220}
	I0130 22:33:36.225412  686843 main.go:141] libmachine: (auto-381927) DBG | trying to create private KVM network mk-auto-381927 192.168.61.0/24...
	I0130 22:33:36.308549  686843 main.go:141] libmachine: (auto-381927) DBG | private KVM network mk-auto-381927 192.168.61.0/24 created
	I0130 22:33:36.308746  686843 main.go:141] libmachine: (auto-381927) Setting up store path in /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927 ...
	I0130 22:33:36.308780  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.308675  686865 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:33:36.308802  686843 main.go:141] libmachine: (auto-381927) Building disk image from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 22:33:36.308833  686843 main.go:141] libmachine: (auto-381927) Downloading /home/jenkins/minikube-integration/18014-640473/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso...
	I0130 22:33:36.557111  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.556974  686865 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa...
	I0130 22:33:36.704187  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.704034  686865 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/auto-381927.rawdisk...
	I0130 22:33:36.704224  686843 main.go:141] libmachine: (auto-381927) DBG | Writing magic tar header
	I0130 22:33:36.704241  686843 main.go:141] libmachine: (auto-381927) DBG | Writing SSH key tar header
	I0130 22:33:36.704264  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:36.704158  686865 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927 ...
	I0130 22:33:36.704277  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927
	I0130 22:33:36.704334  686843 main.go:141] libmachine: (auto-381927) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927 (perms=drwx------)
	I0130 22:33:36.704364  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube/machines
	I0130 22:33:36.704380  686843 main.go:141] libmachine: (auto-381927) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube/machines (perms=drwxr-xr-x)
	I0130 22:33:36.704399  686843 main.go:141] libmachine: (auto-381927) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473/.minikube (perms=drwxr-xr-x)
	I0130 22:33:36.704415  686843 main.go:141] libmachine: (auto-381927) Setting executable bit set on /home/jenkins/minikube-integration/18014-640473 (perms=drwxrwxr-x)
	I0130 22:33:36.704430  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:33:36.704440  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18014-640473
	I0130 22:33:36.704450  686843 main.go:141] libmachine: (auto-381927) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0130 22:33:36.704461  686843 main.go:141] libmachine: (auto-381927) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0130 22:33:36.704476  686843 main.go:141] libmachine: (auto-381927) Creating domain...
	I0130 22:33:36.704493  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0130 22:33:36.704507  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home/jenkins
	I0130 22:33:36.704519  686843 main.go:141] libmachine: (auto-381927) DBG | Checking permissions on dir: /home
	I0130 22:33:36.704532  686843 main.go:141] libmachine: (auto-381927) DBG | Skipping /home - not owner
	I0130 22:33:36.705621  686843 main.go:141] libmachine: (auto-381927) define libvirt domain using xml: 
	I0130 22:33:36.705657  686843 main.go:141] libmachine: (auto-381927) <domain type='kvm'>
	I0130 22:33:36.705674  686843 main.go:141] libmachine: (auto-381927)   <name>auto-381927</name>
	I0130 22:33:36.705687  686843 main.go:141] libmachine: (auto-381927)   <memory unit='MiB'>3072</memory>
	I0130 22:33:36.705702  686843 main.go:141] libmachine: (auto-381927)   <vcpu>2</vcpu>
	I0130 22:33:36.705714  686843 main.go:141] libmachine: (auto-381927)   <features>
	I0130 22:33:36.705727  686843 main.go:141] libmachine: (auto-381927)     <acpi/>
	I0130 22:33:36.705738  686843 main.go:141] libmachine: (auto-381927)     <apic/>
	I0130 22:33:36.705750  686843 main.go:141] libmachine: (auto-381927)     <pae/>
	I0130 22:33:36.705759  686843 main.go:141] libmachine: (auto-381927)     
	I0130 22:33:36.705796  686843 main.go:141] libmachine: (auto-381927)   </features>
	I0130 22:33:36.705835  686843 main.go:141] libmachine: (auto-381927)   <cpu mode='host-passthrough'>
	I0130 22:33:36.705848  686843 main.go:141] libmachine: (auto-381927)   
	I0130 22:33:36.705865  686843 main.go:141] libmachine: (auto-381927)   </cpu>
	I0130 22:33:36.705878  686843 main.go:141] libmachine: (auto-381927)   <os>
	I0130 22:33:36.705890  686843 main.go:141] libmachine: (auto-381927)     <type>hvm</type>
	I0130 22:33:36.705924  686843 main.go:141] libmachine: (auto-381927)     <boot dev='cdrom'/>
	I0130 22:33:36.705949  686843 main.go:141] libmachine: (auto-381927)     <boot dev='hd'/>
	I0130 22:33:36.705966  686843 main.go:141] libmachine: (auto-381927)     <bootmenu enable='no'/>
	I0130 22:33:36.705978  686843 main.go:141] libmachine: (auto-381927)   </os>
	I0130 22:33:36.705992  686843 main.go:141] libmachine: (auto-381927)   <devices>
	I0130 22:33:36.706006  686843 main.go:141] libmachine: (auto-381927)     <disk type='file' device='cdrom'>
	I0130 22:33:36.706043  686843 main.go:141] libmachine: (auto-381927)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/boot2docker.iso'/>
	I0130 22:33:36.706066  686843 main.go:141] libmachine: (auto-381927)       <target dev='hdc' bus='scsi'/>
	I0130 22:33:36.706076  686843 main.go:141] libmachine: (auto-381927)       <readonly/>
	I0130 22:33:36.706084  686843 main.go:141] libmachine: (auto-381927)     </disk>
	I0130 22:33:36.706099  686843 main.go:141] libmachine: (auto-381927)     <disk type='file' device='disk'>
	I0130 22:33:36.706110  686843 main.go:141] libmachine: (auto-381927)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0130 22:33:36.706126  686843 main.go:141] libmachine: (auto-381927)       <source file='/home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/auto-381927.rawdisk'/>
	I0130 22:33:36.706139  686843 main.go:141] libmachine: (auto-381927)       <target dev='hda' bus='virtio'/>
	I0130 22:33:36.706152  686843 main.go:141] libmachine: (auto-381927)     </disk>
	I0130 22:33:36.706164  686843 main.go:141] libmachine: (auto-381927)     <interface type='network'>
	I0130 22:33:36.706177  686843 main.go:141] libmachine: (auto-381927)       <source network='mk-auto-381927'/>
	I0130 22:33:36.706187  686843 main.go:141] libmachine: (auto-381927)       <model type='virtio'/>
	I0130 22:33:36.706199  686843 main.go:141] libmachine: (auto-381927)     </interface>
	I0130 22:33:36.706208  686843 main.go:141] libmachine: (auto-381927)     <interface type='network'>
	I0130 22:33:36.706247  686843 main.go:141] libmachine: (auto-381927)       <source network='default'/>
	I0130 22:33:36.706268  686843 main.go:141] libmachine: (auto-381927)       <model type='virtio'/>
	I0130 22:33:36.706282  686843 main.go:141] libmachine: (auto-381927)     </interface>
	I0130 22:33:36.706294  686843 main.go:141] libmachine: (auto-381927)     <serial type='pty'>
	I0130 22:33:36.706306  686843 main.go:141] libmachine: (auto-381927)       <target port='0'/>
	I0130 22:33:36.706318  686843 main.go:141] libmachine: (auto-381927)     </serial>
	I0130 22:33:36.706349  686843 main.go:141] libmachine: (auto-381927)     <console type='pty'>
	I0130 22:33:36.706367  686843 main.go:141] libmachine: (auto-381927)       <target type='serial' port='0'/>
	I0130 22:33:36.706378  686843 main.go:141] libmachine: (auto-381927)     </console>
	I0130 22:33:36.706404  686843 main.go:141] libmachine: (auto-381927)     <rng model='virtio'>
	I0130 22:33:36.706419  686843 main.go:141] libmachine: (auto-381927)       <backend model='random'>/dev/random</backend>
	I0130 22:33:36.706429  686843 main.go:141] libmachine: (auto-381927)     </rng>
	I0130 22:33:36.706443  686843 main.go:141] libmachine: (auto-381927)     
	I0130 22:33:36.706453  686843 main.go:141] libmachine: (auto-381927)     
	I0130 22:33:36.706466  686843 main.go:141] libmachine: (auto-381927)   </devices>
	I0130 22:33:36.706477  686843 main.go:141] libmachine: (auto-381927) </domain>
	I0130 22:33:36.706489  686843 main.go:141] libmachine: (auto-381927) 
	I0130 22:33:36.710727  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:75:bd:25 in network default
	I0130 22:33:36.711370  686843 main.go:141] libmachine: (auto-381927) Ensuring networks are active...
	I0130 22:33:36.711390  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:36.712220  686843 main.go:141] libmachine: (auto-381927) Ensuring network default is active
	I0130 22:33:36.712558  686843 main.go:141] libmachine: (auto-381927) Ensuring network mk-auto-381927 is active
	I0130 22:33:36.713047  686843 main.go:141] libmachine: (auto-381927) Getting domain xml...
	I0130 22:33:36.713952  686843 main.go:141] libmachine: (auto-381927) Creating domain...
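	The domain definition logged above is ordinary libvirt XML, so the same machine could be defined and inspected outside of minikube with plain virsh. A minimal sketch, assuming the XML has been saved to a hypothetical auto-381927.xml and that virsh can reach the same qemu:///system connection used by the kvm2 driver:

	# Hypothetical manual equivalent of the define/start steps the driver performs above.
	virsh --connect qemu:///system define auto-381927.xml   # define the domain from the XML logged above
	virsh --connect qemu:///system start auto-381927        # boot the VM ("Creating domain...")
	virsh --connect qemu:///system dumpxml auto-381927      # inspect the resulting domain XML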
	I0130 22:33:37.988963  686843 main.go:141] libmachine: (auto-381927) Waiting to get IP...
	I0130 22:33:37.989865  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:37.990408  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:37.990438  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:37.990362  686865 retry.go:31] will retry after 220.487148ms: waiting for machine to come up
	I0130 22:33:38.212962  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:38.213518  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:38.213581  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:38.213484  686865 retry.go:31] will retry after 386.94742ms: waiting for machine to come up
	I0130 22:33:38.602168  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:38.602807  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:38.602835  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:38.602743  686865 retry.go:31] will retry after 348.668482ms: waiting for machine to come up
	I0130 22:33:38.953399  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:38.953855  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:38.953886  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:38.953801  686865 retry.go:31] will retry after 477.255339ms: waiting for machine to come up
	I0130 22:33:39.432489  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:39.433025  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:39.433060  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:39.432964  686865 retry.go:31] will retry after 499.471344ms: waiting for machine to come up
	I0130 22:33:39.934548  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:39.935033  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:39.935066  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:39.934967  686865 retry.go:31] will retry after 820.385175ms: waiting for machine to come up
	I0130 22:33:40.757024  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:40.757399  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:40.757433  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:40.757351  686865 retry.go:31] will retry after 910.762251ms: waiting for machine to come up
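	While libmachine polls for the guest's address (the retry lines above), the lease it is waiting for can usually be read straight from libvirt. A sketch of that manual check, assuming access to the same host and the private network name shown in the log:

	# Hypothetical check of the DHCP lease libmachine is retrying for.
	virsh --connect qemu:///system net-dhcp-leases mk-auto-381927
	# Narrow the output to the MAC address reported in the log:
	virsh --connect qemu:///system net-dhcp-leases mk-auto-381927 --mac 52:54:00:5f:62:b5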
	I0130 22:33:38.285853  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:38.784973  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:39.285506  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:39.785313  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:40.284931  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:40.784933  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:41.285621  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:41.785349  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:42.285498  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:42.785697  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:43.285283  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:43.785883  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:44.285118  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:44.785180  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:45.285413  686214 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0130 22:33:45.433777  686214 kubeadm.go:1088] duration metric: took 11.553336789s to wait for elevateKubeSystemPrivileges.
	I0130 22:33:45.433820  686214 kubeadm.go:406] StartCluster complete in 24.777827638s
	I0130 22:33:45.433845  686214 settings.go:142] acquiring lock: {Name:mkf52d9b515235198504d48dd921760d5b0c99a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:45.433960  686214 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:33:45.436333  686214 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/kubeconfig: {Name:mkce90553d829bbf6441b9724d8a5f7b9b8eb39a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:45.436626  686214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0130 22:33:45.436720  686214 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0130 22:33:45.436780  686214 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-507807"
	I0130 22:33:45.436794  686214 addons.go:69] Setting default-storageclass=true in profile "newest-cni-507807"
	I0130 22:33:45.436817  686214 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-507807"
	I0130 22:33:45.436828  686214 addons.go:234] Setting addon storage-provisioner=true in "newest-cni-507807"
	I0130 22:33:45.436857  686214 config.go:182] Loaded profile config "newest-cni-507807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:33:45.436906  686214 host.go:66] Checking if "newest-cni-507807" exists ...
	I0130 22:33:45.437412  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:33:45.437427  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:33:45.437448  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:33:45.437449  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:33:45.454749  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45553
	I0130 22:33:45.455274  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:33:45.455806  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:33:45.455828  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:33:45.456657  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45851
	I0130 22:33:45.456688  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:33:45.457247  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:33:45.457264  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetState
	I0130 22:33:45.458085  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:33:45.458113  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:33:45.458599  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:33:45.459237  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:33:45.459284  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:33:45.461510  686214 addons.go:234] Setting addon default-storageclass=true in "newest-cni-507807"
	I0130 22:33:45.461577  686214 host.go:66] Checking if "newest-cni-507807" exists ...
	I0130 22:33:45.462098  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:33:45.462156  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:33:45.479317  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46219
	I0130 22:33:45.479967  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:33:45.480621  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:33:45.480650  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:33:45.481711  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:33:45.481962  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetState
	I0130 22:33:45.483460  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40915
	I0130 22:33:45.483856  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:33:45.485279  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:33:45.485295  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:33:45.485349  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:45.486258  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:33:45.488134  686214 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0130 22:33:45.486881  686214 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:33:45.489479  686214 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:33:45.489678  686214 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:33:45.489703  686214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0130 22:33:45.489726  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:45.493411  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:45.493887  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:45.493914  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:45.494207  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:45.494456  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:45.494649  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:45.494814  686214 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa Username:docker}
	I0130 22:33:45.507839  686214 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45639
	I0130 22:33:45.508487  686214 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:33:45.509037  686214 main.go:141] libmachine: Using API Version  1
	I0130 22:33:45.509066  686214 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:33:45.509606  686214 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:33:45.509895  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetState
	I0130 22:33:45.511806  686214 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:33:45.513805  686214 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0130 22:33:45.513830  686214 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0130 22:33:45.513854  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHHostname
	I0130 22:33:45.516925  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:45.517399  686214 main.go:141] libmachine: (newest-cni-507807) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:65:8c:48", ip: ""} in network mk-newest-cni-507807: {Iface:virbr3 ExpiryTime:2024-01-30 23:33:04 +0000 UTC Type:0 Mac:52:54:00:65:8c:48 Iaid: IPaddr:192.168.39.100 Prefix:24 Hostname:newest-cni-507807 Clientid:01:52:54:00:65:8c:48}
	I0130 22:33:45.517422  686214 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined IP address 192.168.39.100 and MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:33:45.517453  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHPort
	I0130 22:33:45.517636  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHKeyPath
	I0130 22:33:45.517820  686214 main.go:141] libmachine: (newest-cni-507807) Calling .GetSSHUsername
	I0130 22:33:45.518026  686214 sshutil.go:53] new ssh client: &{IP:192.168.39.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/newest-cni-507807/id_rsa Username:docker}
	I0130 22:33:45.602645  686214 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0130 22:33:45.680014  686214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0130 22:33:45.709539  686214 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0130 22:33:45.976387  686214 kapi.go:248] "coredns" deployment in "kube-system" namespace and "newest-cni-507807" context rescaled to 1 replicas
	I0130 22:33:45.976445  686214 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.100 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0130 22:33:45.978297  686214 out.go:177] * Verifying Kubernetes components...
	I0130 22:33:41.669979  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:41.670447  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:41.670471  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:41.670394  686865 retry.go:31] will retry after 1.424578016s: waiting for machine to come up
	I0130 22:33:43.096585  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:43.097078  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:43.097105  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:43.097037  686865 retry.go:31] will retry after 1.770115604s: waiting for machine to come up
	I0130 22:33:44.868768  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:44.869350  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:44.869384  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:44.869293  686865 retry.go:31] will retry after 1.679520449s: waiting for machine to come up
	I0130 22:33:45.979600  686214 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 22:33:46.183803  686214 start.go:929] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0130 22:33:46.404856  686214 main.go:141] libmachine: Making call to close driver server
	I0130 22:33:46.404890  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Close
	I0130 22:33:46.405038  686214 main.go:141] libmachine: Making call to close driver server
	I0130 22:33:46.405074  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Close
	I0130 22:33:46.405623  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Closing plugin on server side
	I0130 22:33:46.405657  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Closing plugin on server side
	I0130 22:33:46.406630  686214 api_server.go:52] waiting for apiserver process to appear ...
	I0130 22:33:46.406690  686214 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 22:33:46.407235  686214 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:33:46.407253  686214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:33:46.407264  686214 main.go:141] libmachine: Making call to close driver server
	I0130 22:33:46.407290  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Close
	I0130 22:33:46.407771  686214 main.go:141] libmachine: Failed to make call to close driver server: unexpected EOF
	I0130 22:33:46.407792  686214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:33:46.407804  686214 main.go:141] libmachine: Making call to close driver server
	I0130 22:33:46.407820  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Close
	I0130 22:33:46.407980  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Closing plugin on server side
	I0130 22:33:46.408008  686214 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:33:46.408035  686214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:33:46.408355  686214 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:33:46.408374  686214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:33:46.428145  686214 main.go:141] libmachine: Making call to close driver server
	I0130 22:33:46.428173  686214 main.go:141] libmachine: (newest-cni-507807) Calling .Close
	I0130 22:33:46.428491  686214 main.go:141] libmachine: (newest-cni-507807) DBG | Closing plugin on server side
	I0130 22:33:46.428507  686214 main.go:141] libmachine: Successfully made call to close driver server
	I0130 22:33:46.428568  686214 main.go:141] libmachine: Making call to close connection to plugin binary
	I0130 22:33:46.429995  686214 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0130 22:33:46.431562  686214 addons.go:505] enable addons completed in 994.848798ms: enabled=[storage-provisioner default-storageclass]
	I0130 22:33:46.445431  686214 api_server.go:72] duration metric: took 468.93529ms to wait for apiserver process to appear ...
	I0130 22:33:46.445457  686214 api_server.go:88] waiting for apiserver healthz status ...
	I0130 22:33:46.445499  686214 api_server.go:253] Checking apiserver healthz at https://192.168.39.100:8443/healthz ...
	I0130 22:33:46.453923  686214 api_server.go:279] https://192.168.39.100:8443/healthz returned 200:
	ok
	I0130 22:33:46.455733  686214 api_server.go:141] control plane version: v1.29.0-rc.2
	I0130 22:33:46.455761  686214 api_server.go:131] duration metric: took 10.296037ms to wait for apiserver health ...
	I0130 22:33:46.455772  686214 system_pods.go:43] waiting for kube-system pods to appear ...
	I0130 22:33:46.479234  686214 system_pods.go:59] 6 kube-system pods found
	I0130 22:33:46.479265  686214 system_pods.go:61] "etcd-newest-cni-507807" [b548fc86-58f1-4340-b7f3-d735dd8f5c55] Running
	I0130 22:33:46.479274  686214 system_pods.go:61] "kube-apiserver-newest-cni-507807" [2ad5093a-5903-41f0-98ea-254becb362b3] Running
	I0130 22:33:46.479280  686214 system_pods.go:61] "kube-controller-manager-newest-cni-507807" [b5e44687-889e-4740-834e-fc299d645405] Running
	I0130 22:33:46.479287  686214 system_pods.go:61] "kube-proxy-4s95f" [08f6ab9b-5f3b-414d-a602-d675171d5dd1] Pending
	I0130 22:33:46.479294  686214 system_pods.go:61] "kube-scheduler-newest-cni-507807" [16c48cfe-c27c-4f56-b4ba-f620897affd2] Running
	I0130 22:33:46.479305  686214 system_pods.go:61] "storage-provisioner" [08de44b0-8f05-49fb-91ff-fcd85e15fe5d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0130 22:33:46.479315  686214 system_pods.go:74] duration metric: took 23.534754ms to wait for pod list to return data ...
	I0130 22:33:46.479328  686214 default_sa.go:34] waiting for default service account to be created ...
	I0130 22:33:46.482635  686214 default_sa.go:45] found service account: "default"
	I0130 22:33:46.482664  686214 default_sa.go:55] duration metric: took 3.328068ms for default service account to be created ...
	I0130 22:33:46.482676  686214 kubeadm.go:581] duration metric: took 506.192125ms to wait for : map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] ...
	I0130 22:33:46.482696  686214 node_conditions.go:102] verifying NodePressure condition ...
	I0130 22:33:46.493339  686214 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0130 22:33:46.493381  686214 node_conditions.go:123] node cpu capacity is 2
	I0130 22:33:46.493438  686214 node_conditions.go:105] duration metric: took 10.735101ms to run NodePressure ...
	I0130 22:33:46.493454  686214 start.go:228] waiting for startup goroutines ...
	I0130 22:33:46.493484  686214 start.go:233] waiting for cluster config update ...
	I0130 22:33:46.493501  686214 start.go:242] writing updated cluster config ...
	I0130 22:33:46.493793  686214 ssh_runner.go:195] Run: rm -f paused
	I0130 22:33:46.569881  686214 start.go:600] kubectl: 1.29.1, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0130 22:33:46.571728  686214 out.go:177] * Done! kubectl is now configured to use "newest-cni-507807" cluster and "default" namespace by default
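	The run above ends with a healthy apiserver and the storage addons enabled for newest-cni-507807. A minimal sketch of repeating those checks by hand, assuming the kubeconfig written by this run is the active one:

	# Hypothetical manual equivalents of the health and pod checks in the log above.
	kubectl --context newest-cni-507807 get --raw='/healthz'        # expect: ok
	kubectl --context newest-cni-507807 -n kube-system get pods     # the kube-system pods enumerated above
	kubectl --context newest-cni-507807 -n default get sa default   # default service account check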
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:13:40 UTC, ends at Tue 2024-01-30 22:33:51 UTC. --
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.372519141Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654031372505152,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=a9dd3029-360c-42a4-8461-bc9caf4f0607 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.372965916Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=13780013-b89e-4ee5-93ee-ca4c885fe15f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.373009372Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=13780013-b89e-4ee5-93ee-ca4c885fe15f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.373209376Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=13780013-b89e-4ee5-93ee-ca4c885fe15f name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.409209262Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=d96e9def-af85-42ba-9651-71e4800e06db name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.409256639Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=d96e9def-af85-42ba-9651-71e4800e06db name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.410071398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=51a00f24-8b9d-4b2e-b30a-72000eb4d1c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.410533457Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654031410521786,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=51a00f24-8b9d-4b2e-b30a-72000eb4d1c4 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.410971611Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a2c2dc0-f916-4864-8b1a-b2ebf69a440b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.411012837Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1a2c2dc0-f916-4864-8b1a-b2ebf69a440b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.411235101Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1a2c2dc0-f916-4864-8b1a-b2ebf69a440b name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.444267934Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=90e67caf-db0d-4402-acce-dbaf7ca6f5a1 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.444314435Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=90e67caf-db0d-4402-acce-dbaf7ca6f5a1 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.444952458Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=0ecfcd7f-49a4-4c99-a705-130366a51a2b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.445342262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654031445330161,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=0ecfcd7f-49a4-4c99-a705-130366a51a2b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.445768730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81807ace-8c96-4c93-998d-3d9b07095da0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.445835840Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=81807ace-8c96-4c93-998d-3d9b07095da0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.445986516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81807ace-8c96-4c93-998d-3d9b07095da0 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.481989845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=bbd3866d-3a1f-485b-af5a-6e8da3fe5777 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.482064358Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=bbd3866d-3a1f-485b-af5a-6e8da3fe5777 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.483348287Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=62c2211e-e4cb-484a-aef8-5b398683b3ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.483695494Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654031483685423,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=62c2211e-e4cb-484a-aef8-5b398683b3ce name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.484261693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=834460c3-a411-456c-826e-bb7b01c08714 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.484326824Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=834460c3-a411-456c-826e-bb7b01c08714 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:33:51 embed-certs-713938 crio[727]: time="2024-01-30 22:33:51.484482015Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267,PodSandboxId:7a63e92d8d9813a22a0744fe8bb0e822df1b6ddb0d2081eba5596996fe4e4f05,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653145196593655,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2812b55-cbd5-411d-b217-0b902e49285b,},Annotations:map[string]string{io.kubernetes.container.hash: 6f854848,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810,PodSandboxId:f9754515a75f02885b333fc934889aa66cc321b5aaebed0307aad6ab8cfcc4e4,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653144554246510,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-l6hkm,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6309cb30-acf7-4925-996d-f059ffe5d3c1,},Annotations:map[string]string{io.kubernetes.container.hash: 6fb5ff5c,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568,PodSandboxId:cbdbcb8601a8867c8dc5c3c1884527a03cff2591970caf3b46067ab79f81611b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653142578862277,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f7mgv,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 57f78a6b-c2f9-471e-9861-8b74fd700ecf,},Annotations:map[string]string{io.kubernetes.container.hash: 66210b12,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb,PodSandboxId:7d3cd0f6c8749de394d991ae83b90c83b0055fda3a4c6854c9d7f7a08da38628,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653121327705450,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffe3824975fcbe5355497f363b9b40b7,},An
notations:map[string]string{io.kubernetes.container.hash: ef106f50,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c,PodSandboxId:16d036c206b52b2f303d5027e9373905df530fcc7d087830596cdfb890982751,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653120717351616,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a0923d3cb589709f06699bb7da7210af,},Annotations:
map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8,PodSandboxId:67d7759b1a42e9db45dbc80c6bbc78dc03af61c54ee8131ffcb37302241bd5de,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653120563800479,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e4388c8af927ab2894a84
5a4e478f947,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef,PodSandboxId:6bd300c0020491cb790d718b4c1d994b03d0a3fe3d463d50bdc0fbf257a03e28,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653120476614887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-713938,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e298308706595093a6bbcd120902c17
d,},Annotations:map[string]string{io.kubernetes.container.hash: 58d64642,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=834460c3-a411-456c-826e-bb7b01c08714 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c736f58404008       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   7a63e92d8d981       storage-provisioner
	3a8cdd739a326       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   f9754515a75f0       coredns-5dd5756b68-l6hkm
	40781d148e717       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   cbdbcb8601a88       kube-proxy-f7mgv
	7824c0af9e71a       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   7d3cd0f6c8749       etcd-embed-certs-713938
	30becb2331dfc       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   16d036c206b52       kube-scheduler-embed-certs-713938
	57a4b15732d48       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   67d7759b1a42e       kube-controller-manager-embed-certs-713938
	59033ddbd5513       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   6bd300c002049       kube-apiserver-embed-certs-713938
	
	
	==> coredns [3a8cdd739a3262dc530529fa5426cc2810a546bfed8b3f48e9725b7db1b20810] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:56930 - 1726 "HINFO IN 1646608755236289111.2736373352341829840. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.048660713s
	
	
	==> describe nodes <==
	Name:               embed-certs-713938
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-713938
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=embed-certs-713938
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_18_49_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:18:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-713938
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 22:33:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:29:22 +0000   Tue, 30 Jan 2024 22:18:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.213
	  Hostname:    embed-certs-713938
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 bb3b4d72af244a1cbed79c8534019bb6
	  System UUID:                bb3b4d72-af24-4a1c-bed7-9c8534019bb6
	  Boot ID:                    10a335bc-5ba6-4630-81ca-783257ec95f2
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-l6hkm                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-713938                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-713938             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-713938    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-f7mgv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-713938             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-vhxng               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-713938 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-713938 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-713938 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-713938 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-713938 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-713938 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-713938 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m                kubelet          Node embed-certs-713938 status is now: NodeReady
	  Normal  RegisteredNode           14m                node-controller  Node embed-certs-713938 event: Registered Node embed-certs-713938 in Controller
	
	
	==> dmesg <==
	[Jan30 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.066909] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.391101] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.238302] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.158925] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.514873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000023] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.350319] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.111059] systemd-fstab-generator[664]: Ignoring "noauto" for root device
	[  +0.148849] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.126783] systemd-fstab-generator[688]: Ignoring "noauto" for root device
	[  +0.225193] systemd-fstab-generator[712]: Ignoring "noauto" for root device
	[Jan30 22:14] systemd-fstab-generator[925]: Ignoring "noauto" for root device
	[ +18.866960] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 22:18] systemd-fstab-generator[3516]: Ignoring "noauto" for root device
	[  +9.778008] systemd-fstab-generator[3844]: Ignoring "noauto" for root device
	[Jan30 22:19] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [7824c0af9e71aa27fa76641daef80c1551bcf0a04022e66e4110be9f5360c5cb] <==
	{"level":"info","ts":"2024-01-30T22:18:42.921802Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-01-30T22:18:43.598492Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab is starting a new election at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:43.598591Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab became pre-candidate at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:43.598626Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab received MsgPreVoteResp from abef9893912f41ab at term 1"}
	{"level":"info","ts":"2024-01-30T22:18:43.59867Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab became candidate at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.598694Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab received MsgVoteResp from abef9893912f41ab at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.598721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"abef9893912f41ab became leader at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.598746Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: abef9893912f41ab elected leader abef9893912f41ab at term 2"}
	{"level":"info","ts":"2024-01-30T22:18:43.600333Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"abef9893912f41ab","local-member-attributes":"{Name:embed-certs-713938 ClientURLs:[https://192.168.72.213:2379]}","request-path":"/0/members/abef9893912f41ab/attributes","cluster-id":"dcbc1fe92b491f0f","publish-timeout":"7s"}
	{"level":"info","ts":"2024-01-30T22:18:43.600535Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:43.6007Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:18:43.601827Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T22:18:43.602015Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:43.602054Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T22:18:43.60217Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.213:2379"}
	{"level":"info","ts":"2024-01-30T22:18:43.602274Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:43.606841Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"dcbc1fe92b491f0f","local-member-id":"abef9893912f41ab","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:43.606988Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:18:43.607038Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:28:43.87845Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":714}
	{"level":"info","ts":"2024-01-30T22:28:43.880961Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":714,"took":"2.099617ms","hash":4222840481}
	{"level":"info","ts":"2024-01-30T22:28:43.881041Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4222840481,"revision":714,"compact-revision":-1}
	{"level":"info","ts":"2024-01-30T22:33:43.888649Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":956}
	{"level":"info","ts":"2024-01-30T22:33:43.890878Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":956,"took":"1.397276ms","hash":2563860287}
	{"level":"info","ts":"2024-01-30T22:33:43.89126Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2563860287,"revision":956,"compact-revision":714}
	
	
	==> kernel <==
	 22:33:51 up 20 min,  0 users,  load average: 0.15, 0.17, 0.16
	Linux embed-certs-713938 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [59033ddbd5513f3f5b41158298eceb9c9c791009282f3306b729b8a14b899cef] <==
	E0130 22:29:46.498504       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:29:46.498514       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:30:45.386301       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 22:31:45.386576       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:31:46.497567       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:31:46.497626       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:31:46.497644       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:31:46.498856       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:31:46.498961       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:31:46.498969       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:32:45.386238       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 22:33:45.386388       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:33:45.501982       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:33:45.502158       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:33:45.502936       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:33:46.502661       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:33:46.502937       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:33:46.503019       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:33:46.503306       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:33:46.503330       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:33:46.506464       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [57a4b15732d48cafab63825ff5501f04b2e95f63e0a6656fb6ff3f309f4367d8] <==
	I0130 22:28:01.045265       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:28:30.555359       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:28:31.055991       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:00.561955       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:01.065801       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:30.567877       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:31.075142       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:00.574493       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:01.083920       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:30:12.970854       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="295.388µs"
	I0130 22:30:23.969944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="144.117µs"
	E0130 22:30:30.581650       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:31.092379       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:00.588566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:01.103321       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:30.595418       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:31.116869       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:00.602505       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:01.129712       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:30.609223       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:31.138929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:33:00.615272       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:33:01.148803       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:33:30.621676       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:33:31.159166       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [40781d148e717d2fe0cd809217c8ebb7f4dba4e1b79135896a9f7e14dc1ce568] <==
	I0130 22:19:04.106893       1 server_others.go:69] "Using iptables proxy"
	I0130 22:19:04.237886       1 node.go:141] Successfully retrieved node IP: 192.168.72.213
	I0130 22:19:04.633360       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 22:19:04.633537       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 22:19:04.672725       1 server_others.go:152] "Using iptables Proxier"
	I0130 22:19:04.724058       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 22:19:04.725906       1 server.go:846] "Version info" version="v1.28.4"
	I0130 22:19:04.730214       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 22:19:04.733887       1 config.go:188] "Starting service config controller"
	I0130 22:19:04.734245       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 22:19:04.734350       1 config.go:97] "Starting endpoint slice config controller"
	I0130 22:19:04.734403       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 22:19:04.739695       1 config.go:315] "Starting node config controller"
	I0130 22:19:04.739735       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 22:19:04.839819       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 22:19:04.839900       1 shared_informer.go:318] Caches are synced for service config
	I0130 22:19:04.839921       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [30becb2331dfc4d372b20022c232a2653458c064e8d3f59d182eee511be8907c] <==
	W0130 22:18:45.531452       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:18:45.531462       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:18:46.378499       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:18:46.378570       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 22:18:46.444574       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0130 22:18:46.444740       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0130 22:18:46.522417       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:18:46.522467       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 22:18:46.524493       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:46.524512       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:46.547956       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0130 22:18:46.548038       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0130 22:18:46.605644       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:18:46.605745       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 22:18:46.684988       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0130 22:18:46.685052       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0130 22:18:46.693398       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 22:18:46.693493       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 22:18:46.695828       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:18:46.695885       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:18:46.738002       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:18:46.738054       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 22:18:46.785909       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0130 22:18:46.785960       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0130 22:18:48.713197       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:13:40 UTC, ends at Tue 2024-01-30 22:33:52 UTC. --
	Jan 30 22:31:19 embed-certs-713938 kubelet[3851]: E0130 22:31:19.953746    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:33 embed-certs-713938 kubelet[3851]: E0130 22:31:33.953234    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:48 embed-certs-713938 kubelet[3851]: E0130 22:31:48.953957    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]: E0130 22:31:49.087349    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:31:49 embed-certs-713938 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:32:02 embed-certs-713938 kubelet[3851]: E0130 22:32:02.954235    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:16 embed-certs-713938 kubelet[3851]: E0130 22:32:16.953304    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:27 embed-certs-713938 kubelet[3851]: E0130 22:32:27.953743    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:38 embed-certs-713938 kubelet[3851]: E0130 22:32:38.955309    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:32:49 embed-certs-713938 kubelet[3851]: E0130 22:32:49.082514    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:32:49 embed-certs-713938 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:32:49 embed-certs-713938 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:32:49 embed-certs-713938 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:32:50 embed-certs-713938 kubelet[3851]: E0130 22:32:50.955149    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:33:03 embed-certs-713938 kubelet[3851]: E0130 22:33:03.953710    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:33:16 embed-certs-713938 kubelet[3851]: E0130 22:33:16.956581    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:33:28 embed-certs-713938 kubelet[3851]: E0130 22:33:28.953722    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:33:40 embed-certs-713938 kubelet[3851]: E0130 22:33:40.955680    3851 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-vhxng" podUID="87663986-4226-44fc-9eea-43dd94a12fae"
	Jan 30 22:33:49 embed-certs-713938 kubelet[3851]: E0130 22:33:49.088788    3851 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:33:49 embed-certs-713938 kubelet[3851]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:33:49 embed-certs-713938 kubelet[3851]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:33:49 embed-certs-713938 kubelet[3851]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:33:49 embed-certs-713938 kubelet[3851]: E0130 22:33:49.177531    3851 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	
	
	==> storage-provisioner [c736f584040081d9c8803349751688b069548ab097c8977f99046781967f6267] <==
	I0130 22:19:05.354010       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:19:05.369946       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:19:05.370692       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:19:05.386065       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:19:05.387534       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-713938_28bf9998-97c9-42a3-8688-1380b0cd3222!
	I0130 22:19:05.386790       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"99103890-2a9b-434b-b83a-f09cc284a485", APIVersion:"v1", ResourceVersion:"459", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-713938_28bf9998-97c9-42a3-8688-1380b0cd3222 became leader
	I0130 22:19:05.488521       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-713938_28bf9998-97c9-42a3-8688-1380b0cd3222!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-713938 -n embed-certs-713938
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-713938 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-vhxng
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-713938 describe pod metrics-server-57f55c9bc5-vhxng
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-713938 describe pod metrics-server-57f55c9bc5-vhxng: exit status 1 (61.79388ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-vhxng" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-713938 describe pod metrics-server-57f55c9bc5-vhxng: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (68.56s)
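For context, the two errors that recur throughout the kubelet journal above are side effects of the test setup rather than new information: the metrics-server ImagePullBackOff follows from the earlier "addons enable metrics-server" step that points the MetricsServer registry at fake.domain (see the Audit table in the post-mortem logs below), so the image can never be pulled, and the KUBE-KUBELET-CANARY failure indicates the guest kernel has no ip6tables nat table available. Note also that the describe above queries the default namespace, which is likely why the kube-system pod metrics-server-57f55c9bc5-vhxng is reported as not found. A minimal sketch of how either symptom could be confirmed by hand, assuming the embed-certs-713938 profile were still present (it is deleted later in this run):

	# Inspect the pull failure against the intentionally unreachable fake.domain registry
	kubectl --context embed-certs-713938 -n kube-system describe pod metrics-server-57f55c9bc5-vhxng
	# Reproduce the missing ip6tables nat table inside the VM
	out/minikube-linux-amd64 -p embed-certs-713938 ssh "lsmod | grep ip6table_nat; sudo ip6tables -t nat -L"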

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (71.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-01-30 22:34:22.424652899 +0000 UTC m=+5633.825595037
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.724µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-850803 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
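The assertion chain here is essentially a label-selector wait followed by an image-string match against the dashboard-metrics-scraper deployment; the describe call above fails after 1.724µs because the test's 9m0s context had already expired, so no deployment info could be collected. A rough manual equivalent of the same check, assuming the default-k8s-diff-port-850803 context were still reachable:

	# Pods the test waits on (label taken from the wait message above)
	kubectl --context default-k8s-diff-port-850803 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# Image the test expects the scraper deployment to reference (should contain registry.k8s.io/echoserver:1.4)
	kubectl --context default-k8s-diff-port-850803 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'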
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-850803 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-850803 logs -n 25: (1.208532941s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:05 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-023824             | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-713938            | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-850803  | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC | 30 Jan 24 22:06 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:06 UTC |                     |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-912992             | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:15 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-023824                  | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-713938                 | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:19 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| start   | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC | 30 Jan 24 22:23 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-850803       | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:08 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-850803 | jenkins | v1.32.0 | 30 Jan 24 22:09 UTC | 30 Jan 24 22:24 UTC |
	|         | default-k8s-diff-port-850803                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-912992                              | old-k8s-version-912992       | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC | 30 Jan 24 22:32 UTC |
	| start   | -p newest-cni-507807 --memory=2200 --alsologtostderr   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:32 UTC | 30 Jan 24 22:33 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-023824                                   | no-preload-023824            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	| start   | -p auto-381927 --memory=3072                           | auto-381927                  | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-507807             | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-507807                                   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-507807                  | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-507807 --memory=2200 --alsologtostderr   | newest-cni-507807            | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p embed-certs-713938                                  | embed-certs-713938           | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC | 30 Jan 24 22:33 UTC |
	| start   | -p kindnet-381927                                      | kindnet-381927               | jenkins | v1.32.0 | 30 Jan 24 22:33 UTC |                     |
	|         | --memory=3072                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --cni=kindnet --driver=kvm2                            |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 22:33:53
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 22:33:53.455650  687381 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:33:53.455752  687381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:33:53.455759  687381 out.go:309] Setting ErrFile to fd 2...
	I0130 22:33:53.455764  687381 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:33:53.455953  687381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:33:53.456537  687381 out.go:303] Setting JSON to false
	I0130 22:33:53.457499  687381 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":11786,"bootTime":1706642248,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:33:53.457571  687381 start.go:138] virtualization: kvm guest
	I0130 22:33:53.459933  687381 out.go:177] * [kindnet-381927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:33:53.461451  687381 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:33:53.461461  687381 notify.go:220] Checking for updates...
	I0130 22:33:53.462936  687381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:33:53.464531  687381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:33:53.465905  687381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:33:53.467334  687381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:33:53.468650  687381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:33:53.470257  687381 config.go:182] Loaded profile config "auto-381927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:33:53.470394  687381 config.go:182] Loaded profile config "default-k8s-diff-port-850803": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:33:53.470523  687381 config.go:182] Loaded profile config "newest-cni-507807": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0130 22:33:53.470611  687381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:33:53.507650  687381 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 22:33:53.509084  687381 start.go:298] selected driver: kvm2
	I0130 22:33:53.509112  687381 start.go:902] validating driver "kvm2" against <nil>
	I0130 22:33:53.509134  687381 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:33:53.509939  687381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:33:53.510022  687381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 22:33:53.525765  687381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 22:33:53.525806  687381 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 22:33:53.526003  687381 start_flags.go:927] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0130 22:33:53.526072  687381 cni.go:84] Creating CNI manager for "kindnet"
	I0130 22:33:53.526084  687381 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0130 22:33:53.526094  687381 start_flags.go:321] config:
	{Name:kindnet-381927 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:kindnet-381927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:33:53.526248  687381 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 22:33:53.528033  687381 out.go:177] * Starting control plane node kindnet-381927 in cluster kindnet-381927
	I0130 22:33:52.328415  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:52.328836  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:52.328864  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:52.328794  686865 retry.go:31] will retry after 4.24875503s: waiting for machine to come up
	I0130 22:33:51.283862  687229 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0130 22:33:51.283901  687229 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0130 22:33:51.283911  687229 cache.go:56] Caching tarball of preloaded images
	I0130 22:33:51.284007  687229 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:33:51.284021  687229 cache.go:59] Finished verifying existence of preloaded tar for  v1.29.0-rc.2 on crio
	I0130 22:33:51.284160  687229 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/newest-cni-507807/config.json ...
	I0130 22:33:51.284397  687229 start.go:365] acquiring machines lock for newest-cni-507807: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:33:53.529312  687381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:33:53.529348  687381 preload.go:148] Found local preload: /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 22:33:53.529358  687381 cache.go:56] Caching tarball of preloaded images
	I0130 22:33:53.529454  687381 preload.go:174] Found /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0130 22:33:53.529482  687381 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 22:33:53.529586  687381 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/kindnet-381927/config.json ...
	I0130 22:33:53.529619  687381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/kindnet-381927/config.json: {Name:mk085f7705154a0884bd030bdf15dd088dbda1cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:33:53.529776  687381 start.go:365] acquiring machines lock for kindnet-381927: {Name:mk228072be03481832f0d42af38622afc0ea7fa5 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0130 22:33:56.578694  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:33:56.579122  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find current IP address of domain auto-381927 in network mk-auto-381927
	I0130 22:33:56.579148  686843 main.go:141] libmachine: (auto-381927) DBG | I0130 22:33:56.579079  686865 retry.go:31] will retry after 3.786471892s: waiting for machine to come up
	I0130 22:34:00.370255  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:00.370680  686843 main.go:141] libmachine: (auto-381927) Found IP for machine: 192.168.61.216
	I0130 22:34:00.370710  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has current primary IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:00.370720  686843 main.go:141] libmachine: (auto-381927) Reserving static IP address...
	I0130 22:34:00.370986  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find host DHCP lease matching {name: "auto-381927", mac: "52:54:00:5f:62:b5", ip: "192.168.61.216"} in network mk-auto-381927
	I0130 22:34:00.445064  686843 main.go:141] libmachine: (auto-381927) DBG | Getting to WaitForSSH function...
	I0130 22:34:00.445102  686843 main.go:141] libmachine: (auto-381927) Reserved static IP address: 192.168.61.216
	I0130 22:34:00.445117  686843 main.go:141] libmachine: (auto-381927) Waiting for SSH to be available...
	I0130 22:34:00.447949  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:00.448414  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927
	I0130 22:34:00.448449  686843 main.go:141] libmachine: (auto-381927) DBG | unable to find defined IP address of network mk-auto-381927 interface with MAC address 52:54:00:5f:62:b5
	I0130 22:34:00.448554  686843 main.go:141] libmachine: (auto-381927) DBG | Using SSH client type: external
	I0130 22:34:00.448578  686843 main.go:141] libmachine: (auto-381927) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa (-rw-------)
	I0130 22:34:00.448621  686843 main.go:141] libmachine: (auto-381927) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:34:00.448644  686843 main.go:141] libmachine: (auto-381927) DBG | About to run SSH command:
	I0130 22:34:00.448664  686843 main.go:141] libmachine: (auto-381927) DBG | exit 0
	I0130 22:34:00.452088  686843 main.go:141] libmachine: (auto-381927) DBG | SSH cmd err, output: exit status 255: 
	I0130 22:34:00.452117  686843 main.go:141] libmachine: (auto-381927) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0130 22:34:00.452128  686843 main.go:141] libmachine: (auto-381927) DBG | command : exit 0
	I0130 22:34:00.452155  686843 main.go:141] libmachine: (auto-381927) DBG | err     : exit status 255
	I0130 22:34:00.452172  686843 main.go:141] libmachine: (auto-381927) DBG | output  : 
	I0130 22:34:05.110256  687229 start.go:369] acquired machines lock for "newest-cni-507807" in 13.825782995s
	I0130 22:34:05.110307  687229 start.go:96] Skipping create...Using existing machine configuration
	I0130 22:34:05.110320  687229 fix.go:54] fixHost starting: 
	I0130 22:34:05.110759  687229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 22:34:05.110815  687229 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 22:34:05.127756  687229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33579
	I0130 22:34:05.128235  687229 main.go:141] libmachine: () Calling .GetVersion
	I0130 22:34:05.128814  687229 main.go:141] libmachine: Using API Version  1
	I0130 22:34:05.128853  687229 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 22:34:05.129220  687229 main.go:141] libmachine: () Calling .GetMachineName
	I0130 22:34:05.129426  687229 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	I0130 22:34:05.129598  687229 main.go:141] libmachine: (newest-cni-507807) Calling .GetState
	I0130 22:34:05.131251  687229 fix.go:102] recreateIfNeeded on newest-cni-507807: state=Stopped err=<nil>
	I0130 22:34:05.131286  687229 main.go:141] libmachine: (newest-cni-507807) Calling .DriverName
	W0130 22:34:05.131451  687229 fix.go:128] unexpected machine state, will restart: <nil>
	I0130 22:34:05.133710  687229 out.go:177] * Restarting existing kvm2 VM for "newest-cni-507807" ...
	I0130 22:34:03.453717  686843 main.go:141] libmachine: (auto-381927) DBG | Getting to WaitForSSH function...
	I0130 22:34:03.455933  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.456385  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:03.456413  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.456546  686843 main.go:141] libmachine: (auto-381927) DBG | Using SSH client type: external
	I0130 22:34:03.456574  686843 main.go:141] libmachine: (auto-381927) DBG | Using SSH private key: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa (-rw-------)
	I0130 22:34:03.456611  686843 main.go:141] libmachine: (auto-381927) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.216 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0130 22:34:03.456625  686843 main.go:141] libmachine: (auto-381927) DBG | About to run SSH command:
	I0130 22:34:03.456643  686843 main.go:141] libmachine: (auto-381927) DBG | exit 0
	I0130 22:34:03.549105  686843 main.go:141] libmachine: (auto-381927) DBG | SSH cmd err, output: <nil>: 
	I0130 22:34:03.549395  686843 main.go:141] libmachine: (auto-381927) KVM machine creation complete!
	I0130 22:34:03.549768  686843 main.go:141] libmachine: (auto-381927) Calling .GetConfigRaw
	I0130 22:34:03.550439  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:03.550659  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:03.550796  686843 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0130 22:34:03.550814  686843 main.go:141] libmachine: (auto-381927) Calling .GetState
	I0130 22:34:03.552070  686843 main.go:141] libmachine: Detecting operating system of created instance...
	I0130 22:34:03.552086  686843 main.go:141] libmachine: Waiting for SSH to be available...
	I0130 22:34:03.552092  686843 main.go:141] libmachine: Getting to WaitForSSH function...
	I0130 22:34:03.552098  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:03.554285  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.554682  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:03.554726  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.554863  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:03.555013  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.555193  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.555296  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:03.555444  686843 main.go:141] libmachine: Using SSH client type: native
	I0130 22:34:03.555908  686843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.216 22 <nil> <nil>}
	I0130 22:34:03.555930  686843 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0130 22:34:03.680855  686843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:34:03.680893  686843 main.go:141] libmachine: Detecting the provisioner...
	I0130 22:34:03.680905  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:03.683663  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.684039  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:03.684080  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.684227  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:03.684454  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.684624  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.684779  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:03.684965  686843 main.go:141] libmachine: Using SSH client type: native
	I0130 22:34:03.685299  686843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.216 22 <nil> <nil>}
	I0130 22:34:03.685315  686843 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0130 22:34:03.810442  686843 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g19d536a-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0130 22:34:03.810565  686843 main.go:141] libmachine: found compatible host: buildroot
	I0130 22:34:03.810576  686843 main.go:141] libmachine: Provisioning with buildroot...
	I0130 22:34:03.810585  686843 main.go:141] libmachine: (auto-381927) Calling .GetMachineName
	I0130 22:34:03.810857  686843 buildroot.go:166] provisioning hostname "auto-381927"
	I0130 22:34:03.810892  686843 main.go:141] libmachine: (auto-381927) Calling .GetMachineName
	I0130 22:34:03.811102  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:03.814139  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.814517  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:03.814546  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.814718  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:03.814892  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.815024  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.815120  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:03.815252  686843 main.go:141] libmachine: Using SSH client type: native
	I0130 22:34:03.815568  686843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.216 22 <nil> <nil>}
	I0130 22:34:03.815583  686843 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-381927 && echo "auto-381927" | sudo tee /etc/hostname
	I0130 22:34:03.953619  686843 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-381927
	
	I0130 22:34:03.953658  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:03.956572  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.956925  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:03.956958  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:03.957156  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:03.957331  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.957457  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:03.957579  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:03.957698  686843 main.go:141] libmachine: Using SSH client type: native
	I0130 22:34:03.958011  686843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.216 22 <nil> <nil>}
	I0130 22:34:03.958028  686843 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-381927' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-381927/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-381927' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0130 22:34:04.091326  686843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0130 22:34:04.091363  686843 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18014-640473/.minikube CaCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18014-640473/.minikube}
	I0130 22:34:04.091430  686843 buildroot.go:174] setting up certificates
	I0130 22:34:04.091450  686843 provision.go:83] configureAuth start
	I0130 22:34:04.091465  686843 main.go:141] libmachine: (auto-381927) Calling .GetMachineName
	I0130 22:34:04.091793  686843 main.go:141] libmachine: (auto-381927) Calling .GetIP
	I0130 22:34:04.094440  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.094757  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.094779  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.095024  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:04.097189  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.097508  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.097537  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.097689  686843 provision.go:138] copyHostCerts
	I0130 22:34:04.097741  686843 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem, removing ...
	I0130 22:34:04.097752  686843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem
	I0130 22:34:04.097815  686843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/ca.pem (1078 bytes)
	I0130 22:34:04.097952  686843 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem, removing ...
	I0130 22:34:04.097966  686843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem
	I0130 22:34:04.098007  686843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/cert.pem (1123 bytes)
	I0130 22:34:04.098066  686843 exec_runner.go:144] found /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem, removing ...
	I0130 22:34:04.098074  686843 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem
	I0130 22:34:04.098098  686843 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18014-640473/.minikube/key.pem (1675 bytes)
	I0130 22:34:04.098143  686843 provision.go:112] generating server cert: /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem org=jenkins.auto-381927 san=[192.168.61.216 192.168.61.216 localhost 127.0.0.1 minikube auto-381927]
	I0130 22:34:04.322426  686843 provision.go:172] copyRemoteCerts
	I0130 22:34:04.322490  686843 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0130 22:34:04.322523  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:04.325259  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.325633  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.325659  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.325811  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:04.326022  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:04.326236  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:04.326392  686843 sshutil.go:53] new ssh client: &{IP:192.168.61.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa Username:docker}
	I0130 22:34:04.418537  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0130 22:34:04.441317  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0130 22:34:04.462031  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0130 22:34:04.483647  686843 provision.go:86] duration metric: configureAuth took 392.181385ms
	I0130 22:34:04.483672  686843 buildroot.go:189] setting minikube options for container-runtime
	I0130 22:34:04.483861  686843 config.go:182] Loaded profile config "auto-381927": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:34:04.483945  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:04.486683  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.487073  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.487112  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.487249  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:04.487476  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:04.487611  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:04.487750  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:04.487919  686843 main.go:141] libmachine: Using SSH client type: native
	I0130 22:34:04.488288  686843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.216 22 <nil> <nil>}
	I0130 22:34:04.488313  686843 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0130 22:34:04.836098  686843 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0130 22:34:04.836134  686843 main.go:141] libmachine: Checking connection to Docker...
	I0130 22:34:04.836146  686843 main.go:141] libmachine: (auto-381927) Calling .GetURL
	I0130 22:34:04.837397  686843 main.go:141] libmachine: (auto-381927) DBG | Using libvirt version 6000000
	I0130 22:34:04.839686  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.840021  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.840053  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.840242  686843 main.go:141] libmachine: Docker is up and running!
	I0130 22:34:04.840265  686843 main.go:141] libmachine: Reticulating splines...
	I0130 22:34:04.840275  686843 client.go:171] LocalClient.Create took 28.627626941s
	I0130 22:34:04.840303  686843 start.go:167] duration metric: libmachine.API.Create for "auto-381927" took 28.627698259s
	I0130 22:34:04.840317  686843 start.go:300] post-start starting for "auto-381927" (driver="kvm2")
	I0130 22:34:04.840330  686843 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0130 22:34:04.840353  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:04.840575  686843 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0130 22:34:04.840602  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:04.842720  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.843051  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.843080  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.843186  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:04.843371  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:04.843519  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:04.843647  686843 sshutil.go:53] new ssh client: &{IP:192.168.61.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa Username:docker}
	I0130 22:34:04.940462  686843 ssh_runner.go:195] Run: cat /etc/os-release
	I0130 22:34:04.944910  686843 info.go:137] Remote host: Buildroot 2021.02.12
	I0130 22:34:04.944935  686843 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/addons for local assets ...
	I0130 22:34:04.945001  686843 filesync.go:126] Scanning /home/jenkins/minikube-integration/18014-640473/.minikube/files for local assets ...
	I0130 22:34:04.945098  686843 filesync.go:149] local asset: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem -> 6477182.pem in /etc/ssl/certs
	I0130 22:34:04.945206  686843 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0130 22:34:04.955190  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:34:04.977793  686843 start.go:303] post-start completed in 137.466289ms
	I0130 22:34:04.977840  686843 main.go:141] libmachine: (auto-381927) Calling .GetConfigRaw
	I0130 22:34:04.978381  686843 main.go:141] libmachine: (auto-381927) Calling .GetIP
	I0130 22:34:04.980911  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.981236  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.981258  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.981553  686843 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/config.json ...
	I0130 22:34:04.981713  686843 start.go:128] duration metric: createHost completed in 28.787413124s
	I0130 22:34:04.981734  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:04.983810  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.984140  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:04.984169  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:04.984276  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:04.984456  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:04.984612  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:04.984755  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:04.984909  686843 main.go:141] libmachine: Using SSH client type: native
	I0130 22:34:04.985219  686843 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80a920] 0x80d600 <nil>  [] 0s} 192.168.61.216 22 <nil> <nil>}
	I0130 22:34:04.985231  686843 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0130 22:34:05.110058  686843 main.go:141] libmachine: SSH cmd err, output: <nil>: 1706654045.097747968
	
	I0130 22:34:05.110086  686843 fix.go:206] guest clock: 1706654045.097747968
	I0130 22:34:05.110094  686843 fix.go:219] Guest: 2024-01-30 22:34:05.097747968 +0000 UTC Remote: 2024-01-30 22:34:04.981725525 +0000 UTC m=+28.918714120 (delta=116.022443ms)
	I0130 22:34:05.110133  686843 fix.go:190] guest clock delta is within tolerance: 116.022443ms
	I0130 22:34:05.110138  686843 start.go:83] releasing machines lock for "auto-381927", held for 28.91595869s
	I0130 22:34:05.110164  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:05.110474  686843 main.go:141] libmachine: (auto-381927) Calling .GetIP
	I0130 22:34:05.113001  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:05.113341  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:05.113390  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:05.113528  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:05.113992  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:05.114177  686843 main.go:141] libmachine: (auto-381927) Calling .DriverName
	I0130 22:34:05.114252  686843 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0130 22:34:05.114322  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:05.114419  686843 ssh_runner.go:195] Run: cat /version.json
	I0130 22:34:05.114440  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHHostname
	I0130 22:34:05.117434  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:05.117769  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:05.117828  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:05.117858  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:05.118016  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:05.118206  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:05.118283  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:05.118303  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:05.118347  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:05.118490  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHPort
	I0130 22:34:05.118561  686843 sshutil.go:53] new ssh client: &{IP:192.168.61.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa Username:docker}
	I0130 22:34:05.118639  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHKeyPath
	I0130 22:34:05.118790  686843 main.go:141] libmachine: (auto-381927) Calling .GetSSHUsername
	I0130 22:34:05.118891  686843 sshutil.go:53] new ssh client: &{IP:192.168.61.216 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/auto-381927/id_rsa Username:docker}
	I0130 22:34:05.210022  686843 ssh_runner.go:195] Run: systemctl --version
	I0130 22:34:05.236949  686843 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0130 22:34:05.391027  686843 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0130 22:34:05.397794  686843 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0130 22:34:05.397872  686843 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0130 22:34:05.413220  686843 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0130 22:34:05.413241  686843 start.go:475] detecting cgroup driver to use...
	I0130 22:34:05.413314  686843 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0130 22:34:05.426542  686843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0130 22:34:05.437327  686843 docker.go:217] disabling cri-docker service (if available) ...
	I0130 22:34:05.437376  686843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0130 22:34:05.448888  686843 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0130 22:34:05.461761  686843 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0130 22:34:05.579875  686843 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0130 22:34:05.718139  686843 docker.go:233] disabling docker service ...
	I0130 22:34:05.718209  686843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0130 22:34:05.731646  686843 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0130 22:34:05.743156  686843 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0130 22:34:05.866021  686843 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0130 22:34:05.981934  686843 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0130 22:34:05.994654  686843 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0130 22:34:06.011896  686843 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0130 22:34:06.011957  686843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:34:06.021910  686843 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0130 22:34:06.021971  686843 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:34:06.031865  686843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:34:06.041110  686843 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0130 22:34:06.053049  686843 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0130 22:34:06.064755  686843 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0130 22:34:06.074301  686843 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0130 22:34:06.074347  686843 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0130 22:34:06.091583  686843 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0130 22:34:06.101973  686843 ssh_runner.go:195] Run: sudo systemctl daemon-reload
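The sed edits a few lines above adjust CRI-O through its drop-in config at /etc/crio/crio.conf.d/02-crio.conf. A rough reconstruction of what that drop-in contains after this step (a sketch inferred from the substitutions shown in the log, not a dump of the actual file):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

The conmon_cgroup = "pod" line is the one re-added by the sed '/cgroup_manager = .*/a ...' command after the earlier delete.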
	I0130 22:34:05.135269  687229 main.go:141] libmachine: (newest-cni-507807) Calling .Start
	I0130 22:34:05.135433  687229 main.go:141] libmachine: (newest-cni-507807) Ensuring networks are active...
	I0130 22:34:05.136293  687229 main.go:141] libmachine: (newest-cni-507807) Ensuring network default is active
	I0130 22:34:05.136824  687229 main.go:141] libmachine: (newest-cni-507807) Ensuring network mk-newest-cni-507807 is active
	I0130 22:34:05.137297  687229 main.go:141] libmachine: (newest-cni-507807) Getting domain xml...
	I0130 22:34:05.138081  687229 main.go:141] libmachine: (newest-cni-507807) Creating domain...
	I0130 22:34:06.200046  686843 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0130 22:34:06.384692  686843 start.go:522] Will wait 60s for socket path /var/run/crio/crio.sock
	I0130 22:34:06.384761  686843 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0130 22:34:06.391338  686843 start.go:543] Will wait 60s for crictl version
	I0130 22:34:06.391400  686843 ssh_runner.go:195] Run: which crictl
	I0130 22:34:06.395864  686843 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0130 22:34:06.435044  686843 start.go:559] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1
	I0130 22:34:06.435132  686843 ssh_runner.go:195] Run: crio --version
	I0130 22:34:06.485537  686843 ssh_runner.go:195] Run: crio --version
	I0130 22:34:06.536311  686843 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.24.1 ...
	I0130 22:34:06.537604  686843 main.go:141] libmachine: (auto-381927) Calling .GetIP
	I0130 22:34:06.540621  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:06.540963  686843 main.go:141] libmachine: (auto-381927) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5f:62:b5", ip: ""} in network mk-auto-381927: {Iface:virbr1 ExpiryTime:2024-01-30 23:33:52 +0000 UTC Type:0 Mac:52:54:00:5f:62:b5 Iaid: IPaddr:192.168.61.216 Prefix:24 Hostname:auto-381927 Clientid:01:52:54:00:5f:62:b5}
	I0130 22:34:06.540986  686843 main.go:141] libmachine: (auto-381927) DBG | domain auto-381927 has defined IP address 192.168.61.216 and MAC address 52:54:00:5f:62:b5 in network mk-auto-381927
	I0130 22:34:06.541244  686843 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0130 22:34:06.545370  686843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:34:06.557367  686843 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 22:34:06.557422  686843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:34:06.595260  686843 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0130 22:34:06.595361  686843 ssh_runner.go:195] Run: which lz4
	I0130 22:34:06.599247  686843 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0130 22:34:06.603359  686843 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0130 22:34:06.603393  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0130 22:34:08.517733  686843 crio.go:444] Took 1.918514 seconds to copy over tarball
	I0130 22:34:08.517834  686843 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0130 22:34:06.370076  687229 main.go:141] libmachine: (newest-cni-507807) Waiting to get IP...
	I0130 22:34:06.370941  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:06.371350  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:06.371448  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:06.371333  687459 retry.go:31] will retry after 273.744006ms: waiting for machine to come up
	I0130 22:34:06.647053  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:06.647630  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:06.647666  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:06.647593  687459 retry.go:31] will retry after 387.402632ms: waiting for machine to come up
	I0130 22:34:07.037417  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:07.038087  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:07.038116  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:07.037995  687459 retry.go:31] will retry after 474.758069ms: waiting for machine to come up
	I0130 22:34:07.514811  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:07.515363  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:07.515397  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:07.515318  687459 retry.go:31] will retry after 605.509341ms: waiting for machine to come up
	I0130 22:34:08.122293  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:08.122888  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:08.122916  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:08.122837  687459 retry.go:31] will retry after 480.037625ms: waiting for machine to come up
	I0130 22:34:08.604670  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:08.605147  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:08.605180  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:08.605086  687459 retry.go:31] will retry after 682.226851ms: waiting for machine to come up
	I0130 22:34:09.288920  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:09.289417  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:09.289479  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:09.289368  687459 retry.go:31] will retry after 821.888724ms: waiting for machine to come up
	I0130 22:34:10.112609  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:10.113226  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:10.113254  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:10.113195  687459 retry.go:31] will retry after 1.099512378s: waiting for machine to come up
	I0130 22:34:11.780279  686843 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.262413426s)
	I0130 22:34:11.780320  686843 crio.go:451] Took 3.262557 seconds to extract the tarball
	I0130 22:34:11.780330  686843 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0130 22:34:11.831111  686843 ssh_runner.go:195] Run: sudo crictl images --output json
	I0130 22:34:11.900724  686843 crio.go:496] all images are preloaded for cri-o runtime.
	I0130 22:34:11.900751  686843 cache_images.go:84] Images are preloaded, skipping loading
	I0130 22:34:11.900844  686843 ssh_runner.go:195] Run: crio config
	I0130 22:34:11.963587  686843 cni.go:84] Creating CNI manager for ""
	I0130 22:34:11.963615  686843 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 22:34:11.963641  686843 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0130 22:34:11.963667  686843 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.216 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-381927 NodeName:auto-381927 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.216"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.216 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0130 22:34:11.963872  686843 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.216
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-381927"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.216
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.216"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
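A generated config like the one above can be sanity-checked on the guest before kubeadm init runs, for example (illustrative only; assumes the kubeadm binary already staged at /var/lib/minikube/binaries/v1.28.4, and that this build supports the config validate subcommand):

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml

In this run the config is first written to /var/tmp/minikube/kubeadm.yaml.new and copied to kubeadm.yaml a few lines below, just before kubeadm init is invoked with it.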
	
	I0130 22:34:11.963967  686843 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-381927 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.216
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:auto-381927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0130 22:34:11.964057  686843 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0130 22:34:11.973609  686843 binaries.go:44] Found k8s binaries, skipping transfer
	I0130 22:34:11.973746  686843 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0130 22:34:11.983235  686843 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0130 22:34:12.002818  686843 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0130 22:34:12.022290  686843 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
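The kubelet unit and 10-kubeadm.conf drop-in written just above are ordinary systemd units, so once kubeadm has started the service (the [kubelet-start] lines further down) they can be inspected on the guest with the usual tooling, e.g. (illustrative commands, not part of the test run):

    systemctl cat kubelet
    sudo journalctl -u kubelet --no-pager | tail -n 50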
	I0130 22:34:12.041134  686843 ssh_runner.go:195] Run: grep 192.168.61.216	control-plane.minikube.internal$ /etc/hosts
	I0130 22:34:12.044973  686843 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.216	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0130 22:34:12.056933  686843 certs.go:56] Setting up /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927 for IP: 192.168.61.216
	I0130 22:34:12.056962  686843 certs.go:190] acquiring lock for shared ca certs: {Name:mk706f0de9ba9848655d53c4eb01bdc5e2b54f46 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.057150  686843 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key
	I0130 22:34:12.057210  686843 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key
	I0130 22:34:12.057276  686843 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/client.key
	I0130 22:34:12.057295  686843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/client.crt with IP's: []
	I0130 22:34:12.145174  686843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/client.crt ...
	I0130 22:34:12.145211  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/client.crt: {Name:mkdfefb6101b8db55fc94482c0e4717078ed4f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.145385  686843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/client.key ...
	I0130 22:34:12.145396  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/client.key: {Name:mk6e64e36ddfd671f25942baf2525c1a6f2405fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.145502  686843 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.key.c1c70c45
	I0130 22:34:12.145523  686843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.crt.c1c70c45 with IP's: [192.168.61.216 10.96.0.1 127.0.0.1 10.0.0.1]
	I0130 22:34:12.197895  686843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.crt.c1c70c45 ...
	I0130 22:34:12.197934  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.crt.c1c70c45: {Name:mk5467f30f8f9252000152a694416970c69c28bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.198155  686843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.key.c1c70c45 ...
	I0130 22:34:12.198182  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.key.c1c70c45: {Name:mkf585caa73237631e3f1959e1960701b6e6d1e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.198301  686843 certs.go:337] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.crt.c1c70c45 -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.crt
	I0130 22:34:12.198410  686843 certs.go:341] copying /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.key.c1c70c45 -> /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.key
	I0130 22:34:12.198473  686843 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.key
	I0130 22:34:12.198490  686843 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.crt with IP's: []
	I0130 22:34:12.323244  686843 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.crt ...
	I0130 22:34:12.323284  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.crt: {Name:mk31e2da39003172a825b6918f75bb34d8cdf54d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.323468  686843 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.key ...
	I0130 22:34:12.323480  686843 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.key: {Name:mkf4b848d21707bf2701c86d3e06889d2cfdb9c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 22:34:12.323709  686843 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem (1338 bytes)
	W0130 22:34:12.323759  686843 certs.go:433] ignoring /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718_empty.pem, impossibly tiny 0 bytes
	I0130 22:34:12.323784  686843 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca-key.pem (1679 bytes)
	I0130 22:34:12.323815  686843 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/ca.pem (1078 bytes)
	I0130 22:34:12.323868  686843 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/cert.pem (1123 bytes)
	I0130 22:34:12.323918  686843 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/certs/home/jenkins/minikube-integration/18014-640473/.minikube/certs/key.pem (1675 bytes)
	I0130 22:34:12.323983  686843 certs.go:437] found cert: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem (1708 bytes)
	I0130 22:34:12.324693  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0130 22:34:12.351471  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0130 22:34:12.376655  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0130 22:34:12.400925  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/auto-381927/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0130 22:34:12.513860  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0130 22:34:12.541000  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0130 22:34:12.568872  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0130 22:34:12.594523  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0130 22:34:12.618631  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0130 22:34:12.642299  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/certs/647718.pem --> /usr/share/ca-certificates/647718.pem (1338 bytes)
	I0130 22:34:12.666452  686843 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/ssl/certs/6477182.pem --> /usr/share/ca-certificates/6477182.pem (1708 bytes)
	I0130 22:34:12.691083  686843 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0130 22:34:12.708824  686843 ssh_runner.go:195] Run: openssl version
	I0130 22:34:12.714611  686843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/647718.pem && ln -fs /usr/share/ca-certificates/647718.pem /etc/ssl/certs/647718.pem"
	I0130 22:34:12.725298  686843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/647718.pem
	I0130 22:34:12.730232  686843 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Jan 30 21:11 /usr/share/ca-certificates/647718.pem
	I0130 22:34:12.730314  686843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/647718.pem
	I0130 22:34:12.736252  686843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/647718.pem /etc/ssl/certs/51391683.0"
	I0130 22:34:12.749812  686843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/6477182.pem && ln -fs /usr/share/ca-certificates/6477182.pem /etc/ssl/certs/6477182.pem"
	I0130 22:34:12.762015  686843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/6477182.pem
	I0130 22:34:12.767113  686843 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Jan 30 21:11 /usr/share/ca-certificates/6477182.pem
	I0130 22:34:12.767169  686843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/6477182.pem
	I0130 22:34:12.772980  686843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/6477182.pem /etc/ssl/certs/3ec20f2e.0"
	I0130 22:34:12.785722  686843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0130 22:34:12.797615  686843 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:34:12.802768  686843 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Jan 30 21:01 /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:34:12.802844  686843 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0130 22:34:12.808703  686843 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
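The /etc/ssl/certs/<hash>.0 links created above follow OpenSSL's subject-hash naming: the hash in each ln -fs target comes from the preceding openssl x509 -hash call. For the minikube CA that pairing looks like this (hash value taken from the link name in the log):

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ ls -l /etc/ssl/certs/b5213941.0    # -> /etc/ssl/certs/minikubeCA.pem

so tools that scan /etc/ssl/certs by hash can find minikubeCA.pem without reading every certificate.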
	I0130 22:34:12.818543  686843 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0130 22:34:12.822953  686843 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0130 22:34:12.823019  686843 kubeadm.go:404] StartCluster: {Name:auto-381927 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:auto-381927 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.216 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 22:34:12.823168  686843 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0130 22:34:12.823238  686843 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0130 22:34:12.870777  686843 cri.go:89] found id: ""
	I0130 22:34:12.870869  686843 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0130 22:34:12.879753  686843 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0130 22:34:12.888059  686843 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0130 22:34:12.896546  686843 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0130 22:34:12.896594  686843 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0130 22:34:12.952809  686843 kubeadm.go:322] [init] Using Kubernetes version: v1.28.4
	I0130 22:34:12.952892  686843 kubeadm.go:322] [preflight] Running pre-flight checks
	I0130 22:34:13.099323  686843 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0130 22:34:13.099572  686843 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0130 22:34:13.099726  686843 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0130 22:34:13.341374  686843 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0130 22:34:13.479605  686843 out.go:204]   - Generating certificates and keys ...
	I0130 22:34:13.479738  686843 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0130 22:34:13.479863  686843 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0130 22:34:13.479962  686843 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0130 22:34:13.619608  686843 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0130 22:34:13.736209  686843 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0130 22:34:13.953166  686843 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0130 22:34:14.156715  686843 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0130 22:34:14.156870  686843 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-381927 localhost] and IPs [192.168.61.216 127.0.0.1 ::1]
	I0130 22:34:14.255492  686843 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0130 22:34:14.255825  686843 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-381927 localhost] and IPs [192.168.61.216 127.0.0.1 ::1]
	I0130 22:34:14.307375  686843 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0130 22:34:14.468594  686843 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0130 22:34:14.778543  686843 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0130 22:34:14.778941  686843 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0130 22:34:14.884399  686843 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0130 22:34:15.010284  686843 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0130 22:34:15.102154  686843 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0130 22:34:15.270095  686843 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0130 22:34:15.270662  686843 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0130 22:34:15.273217  686843 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0130 22:34:15.275072  686843 out.go:204]   - Booting up control plane ...
	I0130 22:34:15.275184  686843 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0130 22:34:15.275297  686843 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0130 22:34:15.277596  686843 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0130 22:34:15.294207  686843 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0130 22:34:15.295401  686843 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0130 22:34:15.295560  686843 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0130 22:34:15.440276  686843 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0130 22:34:11.214645  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:11.215125  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:11.215177  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:11.215115  687459 retry.go:31] will retry after 1.270278793s: waiting for machine to come up
	I0130 22:34:12.486778  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:12.487294  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:12.487327  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:12.487237  687459 retry.go:31] will retry after 2.064259074s: waiting for machine to come up
	I0130 22:34:14.553544  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:14.554220  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:14.554257  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:14.554160  687459 retry.go:31] will retry after 2.15315057s: waiting for machine to come up
	I0130 22:34:16.709130  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:16.709675  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:16.709705  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:16.709641  687459 retry.go:31] will retry after 2.88432982s: waiting for machine to come up
	I0130 22:34:19.596709  687229 main.go:141] libmachine: (newest-cni-507807) DBG | domain newest-cni-507807 has defined MAC address 52:54:00:65:8c:48 in network mk-newest-cni-507807
	I0130 22:34:19.597181  687229 main.go:141] libmachine: (newest-cni-507807) DBG | unable to find current IP address of domain newest-cni-507807 in network mk-newest-cni-507807
	I0130 22:34:19.597218  687229 main.go:141] libmachine: (newest-cni-507807) DBG | I0130 22:34:19.597133  687459 retry.go:31] will retry after 3.695231074s: waiting for machine to come up
	
	
	==> CRI-O <==
	-- Journal begins at Tue 2024-01-30 22:14:00 UTC, ends at Tue 2024-01-30 22:34:23 UTC. --
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.161549185Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654063161461226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=319defb1-51d7-4646-bca3-781431605651 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.162585167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=691dab1b-153b-4939-9899-74b75c763654 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.162667042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=691dab1b-153b-4939-9899-74b75c763654 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.162823297Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=691dab1b-153b-4939-9899-74b75c763654 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.205163782Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=0a777234-a3b3-498d-bb5c-6c169d9f5a3b name=/runtime.v1.RuntimeService/Version
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.205248883Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=0a777234-a3b3-498d-bb5c-6c169d9f5a3b name=/runtime.v1.RuntimeService/Version
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.206615405Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=fa5a29ae-450a-4e55-82f0-a42b318fbeb1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.207124639Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654063207109205,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=fa5a29ae-450a-4e55-82f0-a42b318fbeb1 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.207801073Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5fcd54e7-1dd3-4c36-bd5d-03e1ed50cc25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.207928512Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5fcd54e7-1dd3-4c36-bd5d-03e1ed50cc25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.208096870Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5fcd54e7-1dd3-4c36-bd5d-03e1ed50cc25 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.249075780Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=e3dc8676-014e-4c28-86a5-51404e9e1fa2 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.249157021Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=e3dc8676-014e-4c28-86a5-51404e9e1fa2 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.250864217Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=9a423965-04c6-46d6-aab6-7a4f0e3db323 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.251276854Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654063251265250,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=9a423965-04c6-46d6-aab6-7a4f0e3db323 name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.252057119Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9b9e4d7c-36da-4f0e-b5f5-00e4ccf08164 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.252127535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9b9e4d7c-36da-4f0e-b5f5-00e4ccf08164 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.252296442Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9b9e4d7c-36da-4f0e-b5f5-00e4ccf08164 name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.286609279Z" level=debug msg="Request: &VersionRequest{Version:,}" file="go-grpc-middleware/chain.go:25" id=a4ad556e-386f-4ef0-bc65-0233183975b2 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.286664008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=a4ad556e-386f-4ef0-bc65-0233183975b2 name=/runtime.v1.RuntimeService/Version
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.288291491Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="go-grpc-middleware/chain.go:25" id=d4c2227d-9870-45e1-9e90-2f45425bd33b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.288685504Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1706654063288671339,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125561,},InodesUsed:&UInt64Value{Value:63,},},},}" file="go-grpc-middleware/chain.go:25" id=d4c2227d-9870-45e1-9e90-2f45425bd33b name=/runtime.v1.ImageService/ImageFsInfo
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.289383454Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5ebf61c2-3185-4473-b461-ec0848f1bcfa name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.289462661Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5ebf61c2-3185-4473-b461-ec0848f1bcfa name=/runtime.v1.RuntimeService/ListContainers
	Jan 30 22:34:23 default-k8s-diff-port-850803 crio[713]: time="2024-01-30 22:34:23.289640132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52,PodSandboxId:932762720b948b20956bde944e7602144cf12235ff1736150bedcb7537a8097b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1706653174961385335,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a46524c4-645e-4d7e-b0f6-00e4a05f340c,},Annotations:map[string]string{io.kubernetes.container.hash: ab235579,io.kubernetes.container.restartCount
: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0,PodSandboxId:04bd58280675228df393a8fe0f6c886091d899f38e113e953313a945dd704938,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e,State:CONTAINER_RUNNING,CreatedAt:1706653174804745394,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9b97q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b8b32be2-d1fd-4800-b4a4-3db0a23e97f1,},Annotations:map[string]string{io.kubernetes.container.hash: 7e2fe228,io.kubernetes.container.restartCount: 0,io.kubernetes.container.termina
tionMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb,PodSandboxId:94b66e79a14ff255230037a62ed5a4d3ab236bded04db8eabcee6c3a48243ef8,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1706653174175999029,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z27l8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1ff9627e-373c-45d3-87dc-281daaf057e1,},Annotations:map[string]string{io.kubernetes.container.hash: a9939c99,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"nam
e\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7,PodSandboxId:8af4e703fb33d7587d7075bf3f4511d61be7076ae3a6c42804d9ef5624cc40ff,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1706653150757792056,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f71def90bf5a9f34c
0500e71bd6a4621,},Annotations:map[string]string{io.kubernetes.container.hash: de6102d7,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260,PodSandboxId:7a83056fd8c586168e8c4f0e02617f27e281f131e6afe42c61465ee78e20b36c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba,State:CONTAINER_RUNNING,CreatedAt:1706653150788561542,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3c2f04febdd9de3
3dea54c840c5a00,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143,PodSandboxId:c73314d7536a8999232d88a918d5a57c884372ad2da63dd9376dd39613d23770,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499,State:CONTAINER_RUNNING,CreatedAt:1706653150474349597,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ebeb9b7df96a00f8c
494e0b698699ac4,},Annotations:map[string]string{io.kubernetes.container.hash: 3837d5e4,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288,PodSandboxId:e6255630721a23386222ebba8462182c9003cf23b86ff9a963a1c82b005d08c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c,State:CONTAINER_RUNNING,CreatedAt:1706653150164589054,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-850803,io.kubernetes.pod.namespace: kube-system,io.k
ubernetes.pod.uid: 4fc05a6c7fbc28aecc48825551d36dac,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5ebf61c2-3185-4473-b461-ec0848f1bcfa name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	43da5b55fb482       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   932762720b948       storage-provisioner
	39c79e5bf1f78       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   14 minutes ago      Running             kube-proxy                0                   04bd582806752       kube-proxy-9b97q
	226d3c6d1fe8c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   14 minutes ago      Running             coredns                   0                   94b66e79a14ff       coredns-5dd5756b68-z27l8
	c65c8f7f27cef       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   15 minutes ago      Running             kube-scheduler            2                   7a83056fd8c58       kube-scheduler-default-k8s-diff-port-850803
	1ae8e1a1886b9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   8af4e703fb33d       etcd-default-k8s-diff-port-850803
	a6dda49131d42       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   15 minutes ago      Running             kube-apiserver            2                   c73314d7536a8       kube-apiserver-default-k8s-diff-port-850803
	bdf2eff0e83f3       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   15 minutes ago      Running             kube-controller-manager   2                   e6255630721a2       kube-controller-manager-default-k8s-diff-port-850803
	
	
	==> coredns [226d3c6d1fe8cd8303b40f3f51e47ae039016aad8e21753a8b409cb3ce332cfb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-850803
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-850803
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=ee797c3c8a930c6d412d0b471af21f4da96305b5
	                    minikube.k8s.io/name=default-k8s-diff-port-850803
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_01_30T22_19_18_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 30 Jan 2024 22:19:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-850803
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 30 Jan 2024 22:34:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 30 Jan 2024 22:29:50 +0000   Tue, 30 Jan 2024 22:19:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.254
	  Hostname:    default-k8s-diff-port-850803
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 c5243e2b5c284dc1ad35b1a6be575851
	  System UUID:                c5243e2b-5c28-4dc1-ad35-b1a6be575851
	  Boot ID:                    ceabb56f-f95f-4d19-af00-af634aeedb28
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-z27l8                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-default-k8s-diff-port-850803                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-default-k8s-diff-port-850803             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-850803    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-9b97q                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-default-k8s-diff-port-850803             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-nkcv4                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                14m   kubelet          Node default-k8s-diff-port-850803 status is now: NodeReady
	  Normal  RegisteredNode           14m   node-controller  Node default-k8s-diff-port-850803 event: Registered Node default-k8s-diff-port-850803 in Controller
	
	
	==> dmesg <==
	[Jan30 22:13] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.081397] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.546455] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.305559] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[Jan30 22:14] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.506124] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.865067] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.100132] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.145136] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.128300] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.259499] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +17.877134] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[ +21.308001] kauditd_printk_skb: 29 callbacks suppressed
	[Jan30 22:19] systemd-fstab-generator[3503]: Ignoring "noauto" for root device
	[  +9.278029] systemd-fstab-generator[3832]: Ignoring "noauto" for root device
	[ +15.014166] kauditd_printk_skb: 2 callbacks suppressed
	
	
	==> etcd [1ae8e1a1886b9f9bf563db9bafe79638801deb595a99da42d1a4a83aec4d68f7] <==
	{"level":"info","ts":"2024-01-30T22:19:12.838312Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:19:12.839546Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-01-30T22:19:12.84118Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-01-30T22:19:12.843755Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"a0c94ab6025ee16","local-member-id":"c47571729f78ba63","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.844023Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.844094Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-01-30T22:19:12.853974Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-01-30T22:19:12.854103Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-01-30T22:19:12.874481Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.254:2379"}
	{"level":"info","ts":"2024-01-30T22:29:13.279461Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-01-30T22:29:13.282984Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"2.660988ms","hash":529779308}
	{"level":"info","ts":"2024-01-30T22:29:13.283075Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":529779308,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2024-01-30T22:33:19.984418Z","caller":"traceutil/trace.go:171","msg":"trace[134309230] transaction","detail":"{read_only:false; response_revision:1121; number_of_response:1; }","duration":"156.069382ms","start":"2024-01-30T22:33:19.828305Z","end":"2024-01-30T22:33:19.984374Z","steps":["trace[134309230] 'process raft request'  (duration: 155.873821ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T22:33:20.26434Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"124.443915ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-01-30T22:33:20.26446Z","caller":"traceutil/trace.go:171","msg":"trace[1340551884] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1121; }","duration":"124.667228ms","start":"2024-01-30T22:33:20.139776Z","end":"2024-01-30T22:33:20.264443Z","steps":["trace[1340551884] 'range keys from in-memory index tree'  (duration: 124.364902ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T22:34:12.106306Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"158.644624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/\" range_end:\"/registry/events0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-01-30T22:34:12.107099Z","caller":"traceutil/trace.go:171","msg":"trace[997346512] range","detail":"{range_begin:/registry/events/; range_end:/registry/events0; response_count:0; response_revision:1161; }","duration":"159.567807ms","start":"2024-01-30T22:34:11.947506Z","end":"2024-01-30T22:34:12.107074Z","steps":["trace[997346512] 'count revisions from in-memory index tree'  (duration: 158.472447ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T22:34:12.639204Z","caller":"traceutil/trace.go:171","msg":"trace[853378413] transaction","detail":"{read_only:false; response_revision:1162; number_of_response:1; }","duration":"321.579415ms","start":"2024-01-30T22:34:12.317603Z","end":"2024-01-30T22:34:12.639183Z","steps":["trace[853378413] 'process raft request'  (duration: 321.377328ms)"],"step_count":1}
	{"level":"warn","ts":"2024-01-30T22:34:12.640353Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-01-30T22:34:12.317583Z","time spent":"321.72796ms","remote":"127.0.0.1:48536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1113,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1161 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1040 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-01-30T22:34:13.450282Z","caller":"traceutil/trace.go:171","msg":"trace[1123588673] transaction","detail":"{read_only:false; response_revision:1164; number_of_response:1; }","duration":"168.70147ms","start":"2024-01-30T22:34:13.281557Z","end":"2024-01-30T22:34:13.450259Z","steps":["trace[1123588673] 'process raft request'  (duration: 134.612475ms)","trace[1123588673] 'compare'  (duration: 33.973539ms)"],"step_count":2}
	{"level":"info","ts":"2024-01-30T22:34:13.614606Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":920}
	{"level":"warn","ts":"2024-01-30T22:34:13.615271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"144.110594ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13430733941979319185 username:\"kube-apiserver-etcd-client\" auth_revision:1 > compaction:<revision:920 > ","response":"size:5"}
	{"level":"info","ts":"2024-01-30T22:34:13.615395Z","caller":"traceutil/trace.go:171","msg":"trace[1446992744] compact","detail":"{revision:920; response_revision:1164; }","duration":"164.178628ms","start":"2024-01-30T22:34:13.451192Z","end":"2024-01-30T22:34:13.615371Z","steps":["trace[1446992744] 'check and update compact revision'  (duration: 144.002415ms)"],"step_count":1}
	{"level":"info","ts":"2024-01-30T22:34:13.617308Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":920,"took":"2.236077ms","hash":2575168485}
	{"level":"info","ts":"2024-01-30T22:34:13.617392Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2575168485,"revision":920,"compact-revision":677}
	
	
	==> kernel <==
	 22:34:23 up 20 min,  0 users,  load average: 0.13, 0.13, 0.16
	Linux default-k8s-diff-port-850803 5.10.57 #1 SMP Thu Dec 28 22:04:21 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	
	==> kube-apiserver [a6dda49131d4212083afe90b3914b7eab274ed5c24d1e91519e59714385f6143] <==
	E0130 22:30:15.737250       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:30:15.738525       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:31:14.600133       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 22:32:14.599990       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:32:15.737818       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:32:15.737947       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:32:15.737958       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:32:15.739172       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:32:15.739303       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:32:15.739348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0130 22:33:14.599112       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0130 22:34:14.599202       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:34:14.743219       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:34:14.743398       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:34:14.744002       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0130 22:34:15.743554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:34:15.743628       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0130 22:34:15.743639       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0130 22:34:15.743570       1 handler_proxy.go:93] no RequestInfo found in the context
	E0130 22:34:15.743738       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0130 22:34:15.744947       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [bdf2eff0e83f3292dcdfd56bda467bce532864ea8ad7e3a9aff42b3675ae5288] <==
	I0130 22:28:31.613387       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:01.150758       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:01.623566       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:29:31.157297       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:29:31.632760       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:01.162768       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:01.648277       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:30:31.170321       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:30:31.661115       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0130 22:30:34.406961       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="260.77µs"
	I0130 22:30:45.403739       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="122.783µs"
	E0130 22:31:01.176151       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:01.670308       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:31:31.182265       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:31:31.680717       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:01.187313       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:01.688528       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:32:31.195547       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:32:31.700254       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:33:01.202117       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:33:01.710831       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:33:31.209171       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:33:31.722138       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0130 22:34:01.215566       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0130 22:34:01.730970       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [39c79e5bf1f7877df80b9b00c6b19ce1d17e16c2aeccde202e5e950fb7a972f0] <==
	I0130 22:19:35.248848       1 server_others.go:69] "Using iptables proxy"
	I0130 22:19:35.265657       1 node.go:141] Successfully retrieved node IP: 192.168.50.254
	I0130 22:19:35.308666       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0130 22:19:35.308728       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0130 22:19:35.312329       1 server_others.go:152] "Using iptables Proxier"
	I0130 22:19:35.313106       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0130 22:19:35.314252       1 server.go:846] "Version info" version="v1.28.4"
	I0130 22:19:35.314356       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0130 22:19:35.316111       1 config.go:188] "Starting service config controller"
	I0130 22:19:35.316761       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0130 22:19:35.316980       1 config.go:97] "Starting endpoint slice config controller"
	I0130 22:19:35.317145       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0130 22:19:35.319548       1 config.go:315] "Starting node config controller"
	I0130 22:19:35.319686       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0130 22:19:35.417519       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0130 22:19:35.417566       1 shared_informer.go:318] Caches are synced for service config
	I0130 22:19:35.419874       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [c65c8f7f27cefa125b8adc87a22b2db3f59993f0d79891fd55b4b3f631efd260] <==
	W0130 22:19:14.760610       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:19:14.760618       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 22:19:14.762296       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:19:14.762343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0130 22:19:15.600160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0130 22:19:15.600263       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0130 22:19:15.621628       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0130 22:19:15.621700       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0130 22:19:15.650773       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0130 22:19:15.650826       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0130 22:19:15.699379       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0130 22:19:15.699451       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0130 22:19:15.772354       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0130 22:19:15.772406       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0130 22:19:15.791274       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0130 22:19:15.791325       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0130 22:19:15.802102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0130 22:19:15.802424       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0130 22:19:15.840266       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0130 22:19:15.840538       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0130 22:19:16.018790       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0130 22:19:16.018843       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0130 22:19:16.034226       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0130 22:19:16.034275       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0130 22:19:18.351464       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	-- Journal begins at Tue 2024-01-30 22:14:00 UTC, ends at Tue 2024-01-30 22:34:23 UTC. --
	Jan 30 22:31:56 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:31:56.378105    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:09 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:09.377697    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:18.496445    3839 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:32:18 default-k8s-diff-port-850803 kubelet[3839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:32:21 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:21.378205    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:32 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:32.381282    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:47 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:47.377672    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:32:59 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:32:59.378545    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:33:12 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:33:12.378590    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:33:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:33:18.495632    3839 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:33:18 default-k8s-diff-port-850803 kubelet[3839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:33:18 default-k8s-diff-port-850803 kubelet[3839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:33:18 default-k8s-diff-port-850803 kubelet[3839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:33:27 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:33:27.377189    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:33:39 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:33:39.378635    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:33:52 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:33:52.379841    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:34:03 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:34:03.377508    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:34:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:34:18.378629    3839 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-nkcv4" podUID="8ff91827-4613-4a66-963b-9bec1c1493bc"
	Jan 30 22:34:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:34:18.497116    3839 iptables.go:575] "Could not set up iptables canary" err=<
	Jan 30 22:34:18 default-k8s-diff-port-850803 kubelet[3839]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jan 30 22:34:18 default-k8s-diff-port-850803 kubelet[3839]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jan 30 22:34:18 default-k8s-diff-port-850803 kubelet[3839]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jan 30 22:34:18 default-k8s-diff-port-850803 kubelet[3839]: E0130 22:34:18.529845    3839 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	
	
	==> storage-provisioner [43da5b55fb482df7621e71cca1953871054aab9cad7aeb08c4a3033902e2fd52] <==
	I0130 22:19:35.126322       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0130 22:19:35.139705       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0130 22:19:35.140049       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0130 22:19:35.152767       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0130 22:19:35.153185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850803_bfa7c95b-822b-4c7d-bda7-74e8bf4d2e70!
	I0130 22:19:35.156644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6af21fb0-2e65-4b5d-80c3-01a42f661b1d", APIVersion:"v1", ResourceVersion:"416", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-850803_bfa7c95b-822b-4c7d-bda7-74e8bf4d2e70 became leader
	I0130 22:19:35.253993       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-850803_bfa7c95b-822b-4c7d-bda7-74e8bf4d2e70!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-nkcv4
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 describe pod metrics-server-57f55c9bc5-nkcv4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-850803 describe pod metrics-server-57f55c9bc5-nkcv4: exit status 1 (64.268148ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-nkcv4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-850803 describe pod metrics-server-57f55c9bc5-nkcv4: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (71.21s)

                                                
                                    

Test pass (242/310)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 8.36
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.08
9 TestDownloadOnly/v1.16.0/DeleteAll 0.16
10 TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 5.33
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.16
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 4.47
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.15
30 TestBinaryMirror 0.6
31 TestOffline 67.3
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
36 TestAddons/Setup 216.05
38 TestAddons/parallel/Registry 18.44
40 TestAddons/parallel/InspektorGadget 10.97
41 TestAddons/parallel/MetricsServer 6.59
42 TestAddons/parallel/HelmTiller 13.34
44 TestAddons/parallel/CSI 46.91
45 TestAddons/parallel/Headlamp 16.2
46 TestAddons/parallel/CloudSpanner 6.63
47 TestAddons/parallel/LocalPath 60.96
48 TestAddons/parallel/NvidiaDevicePlugin 5.61
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 83.11
55 TestCertExpiration 365.23
57 TestForceSystemdFlag 55.06
58 TestForceSystemdEnv 75.97
60 TestKVMDriverInstallOrUpdate 3.36
64 TestErrorSpam/setup 46.84
65 TestErrorSpam/start 0.42
66 TestErrorSpam/status 0.85
67 TestErrorSpam/pause 1.63
68 TestErrorSpam/unpause 1.81
69 TestErrorSpam/stop 2.3
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 99.89
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 37.72
76 TestFunctional/serial/KubeContext 0.05
77 TestFunctional/serial/KubectlGetPods 0.08
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.38
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
83 TestFunctional/serial/CacheCmd/cache/list 0.07
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.74
86 TestFunctional/serial/CacheCmd/cache/delete 0.13
87 TestFunctional/serial/MinikubeKubectlCmd 0.13
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 38.47
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.57
92 TestFunctional/serial/LogsFileCmd 1.57
93 TestFunctional/serial/InvalidService 4.38
95 TestFunctional/parallel/ConfigCmd 0.48
96 TestFunctional/parallel/DashboardCmd 17.16
97 TestFunctional/parallel/DryRun 0.34
98 TestFunctional/parallel/InternationalLanguage 0.16
99 TestFunctional/parallel/StatusCmd 1.03
103 TestFunctional/parallel/ServiceCmdConnect 23.9
104 TestFunctional/parallel/AddonsCmd 0.18
105 TestFunctional/parallel/PersistentVolumeClaim 47.68
107 TestFunctional/parallel/SSHCmd 0.51
108 TestFunctional/parallel/CpCmd 1.58
109 TestFunctional/parallel/MySQL 28.34
110 TestFunctional/parallel/FileSync 0.28
111 TestFunctional/parallel/CertSync 1.78
115 TestFunctional/parallel/NodeLabels 0.1
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.52
119 TestFunctional/parallel/License 0.22
120 TestFunctional/parallel/Version/short 0.06
121 TestFunctional/parallel/Version/components 0.91
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.34
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.34
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.4
126 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
127 TestFunctional/parallel/ImageCommands/Setup 0.93
128 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
129 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
130 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
133 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
143 TestFunctional/parallel/ProfileCmd/profile_list 0.32
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.85
150 TestFunctional/parallel/ServiceCmd/DeployApp 20.58
151 TestFunctional/parallel/ServiceCmd/List 0.52
152 TestFunctional/parallel/ServiceCmd/JSONOutput 0.55
153 TestFunctional/parallel/MountCmd/any-port 8.23
154 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
155 TestFunctional/parallel/ServiceCmd/Format 0.34
156 TestFunctional/parallel/ServiceCmd/URL 0.39
157 TestFunctional/parallel/MountCmd/specific-port 2.09
158 TestFunctional/parallel/MountCmd/VerifyCleanup 1.81
159 TestFunctional/delete_addon-resizer_images 0.06
160 TestFunctional/delete_my-image_image 0.02
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestIngressAddonLegacy/StartLegacyK8sCluster 75.67
167 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 12.97
168 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.6
172 TestJSONOutput/start/Command 67.64
173 TestJSONOutput/start/Audit 0
175 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
176 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
178 TestJSONOutput/pause/Command 0.68
179 TestJSONOutput/pause/Audit 0
181 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/unpause/Command 0.63
185 TestJSONOutput/unpause/Audit 0
187 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/stop/Command 7.11
191 TestJSONOutput/stop/Audit 0
193 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
195 TestErrorJSONOutput 0.22
200 TestMainNoArgs 0.06
201 TestMinikubeProfile 97.67
204 TestMountStart/serial/StartWithMountFirst 27.12
205 TestMountStart/serial/VerifyMountFirst 0.41
206 TestMountStart/serial/StartWithMountSecond 26.28
207 TestMountStart/serial/VerifyMountSecond 0.41
208 TestMountStart/serial/DeleteFirst 0.69
209 TestMountStart/serial/VerifyMountPostDelete 0.41
210 TestMountStart/serial/Stop 1.23
211 TestMountStart/serial/RestartStopped 25.1
212 TestMountStart/serial/VerifyMountPostStop 0.42
215 TestMultiNode/serial/FreshStart2Nodes 124.21
216 TestMultiNode/serial/DeployApp2Nodes 5.77
217 TestMultiNode/serial/PingHostFrom2Pods 0.95
218 TestMultiNode/serial/AddNode 40.3
219 TestMultiNode/serial/MultiNodeLabels 0.06
220 TestMultiNode/serial/ProfileList 0.22
221 TestMultiNode/serial/CopyFile 7.7
222 TestMultiNode/serial/StopNode 2.97
223 TestMultiNode/serial/StartAfterStop 30.17
225 TestMultiNode/serial/DeleteNode 1.76
227 TestMultiNode/serial/RestartMultiNode 440.26
228 TestMultiNode/serial/ValidateNameConflict 47.46
235 TestScheduledStopUnix 121.84
239 TestRunningBinaryUpgrade 236.4
241 TestKubernetesUpgrade 226.6
244 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
245 TestNoKubernetes/serial/StartWithK8s 102.75
254 TestPause/serial/Start 145.2
255 TestNoKubernetes/serial/StartWithStopK8s 71.24
256 TestNoKubernetes/serial/Start 27.57
257 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
258 TestNoKubernetes/serial/ProfileList 14.14
259 TestPause/serial/SecondStartNoReconfiguration 36.08
260 TestNoKubernetes/serial/Stop 1.24
261 TestNoKubernetes/serial/StartNoArgs 28.18
262 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.23
263 TestPause/serial/Pause 0.86
264 TestPause/serial/VerifyStatus 0.28
265 TestPause/serial/Unpause 0.71
266 TestPause/serial/PauseAgain 0.93
267 TestPause/serial/DeletePaused 1.01
268 TestPause/serial/VerifyDeletedResources 0.24
276 TestNetworkPlugins/group/false 3.63
280 TestStoppedBinaryUpgrade/Setup 0.57
281 TestStoppedBinaryUpgrade/Upgrade 153.66
283 TestStartStop/group/old-k8s-version/serial/FirstStart 178.43
284 TestStoppedBinaryUpgrade/MinikubeLogs 0.91
286 TestStartStop/group/no-preload/serial/FirstStart 150.47
288 TestStartStop/group/embed-certs/serial/FirstStart 133.56
290 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 125.94
291 TestStartStop/group/old-k8s-version/serial/DeployApp 8.42
292 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
294 TestStartStop/group/no-preload/serial/DeployApp 9.31
295 TestStartStop/group/embed-certs/serial/DeployApp 9.34
296 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.05
297 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.17
300 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
301 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
304 TestStartStop/group/old-k8s-version/serial/SecondStart 408.08
307 TestStartStop/group/no-preload/serial/SecondStart 619.41
308 TestStartStop/group/embed-certs/serial/SecondStart 895.03
310 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 901.97
319 TestStartStop/group/newest-cni/serial/FirstStart 58.81
321 TestNetworkPlugins/group/auto/Start 105.64
322 TestStartStop/group/newest-cni/serial/DeployApp 0
323 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.14
324 TestStartStop/group/newest-cni/serial/Stop 3.14
325 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
326 TestStartStop/group/newest-cni/serial/SecondStart 69.06
327 TestNetworkPlugins/group/kindnet/Start 107.92
328 TestNetworkPlugins/group/calico/Start 128.93
329 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
330 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
331 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
332 TestStartStop/group/newest-cni/serial/Pause 3.7
333 TestNetworkPlugins/group/custom-flannel/Start 107.44
334 TestNetworkPlugins/group/auto/KubeletFlags 0.28
335 TestNetworkPlugins/group/auto/NetCatPod 15.32
336 TestNetworkPlugins/group/auto/DNS 0.23
337 TestNetworkPlugins/group/auto/Localhost 0.2
338 TestNetworkPlugins/group/auto/HairPin 0.16
339 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
340 TestNetworkPlugins/group/kindnet/KubeletFlags 0.23
341 TestNetworkPlugins/group/kindnet/NetCatPod 13.26
342 TestNetworkPlugins/group/enable-default-cni/Start 78.92
343 TestNetworkPlugins/group/kindnet/DNS 0.19
344 TestNetworkPlugins/group/kindnet/Localhost 0.17
345 TestNetworkPlugins/group/kindnet/HairPin 0.18
346 TestNetworkPlugins/group/flannel/Start 104.67
347 TestNetworkPlugins/group/calico/ControllerPod 6.01
348 TestNetworkPlugins/group/calico/KubeletFlags 0.26
349 TestNetworkPlugins/group/calico/NetCatPod 15.28
350 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.24
351 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.28
352 TestNetworkPlugins/group/calico/DNS 0.23
353 TestNetworkPlugins/group/calico/Localhost 0.19
354 TestNetworkPlugins/group/calico/HairPin 0.18
355 TestNetworkPlugins/group/custom-flannel/DNS 0.25
356 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
357 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
358 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
359 TestNetworkPlugins/group/enable-default-cni/NetCatPod 13.48
360 TestNetworkPlugins/group/bridge/Start 93.46
361 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
362 TestNetworkPlugins/group/enable-default-cni/Localhost 0.18
363 TestNetworkPlugins/group/enable-default-cni/HairPin 0.14
364 TestNetworkPlugins/group/flannel/ControllerPod 6.01
365 TestNetworkPlugins/group/flannel/KubeletFlags 0.23
366 TestNetworkPlugins/group/flannel/NetCatPod 13.25
367 TestNetworkPlugins/group/flannel/DNS 0.16
368 TestNetworkPlugins/group/flannel/Localhost 0.14
369 TestNetworkPlugins/group/flannel/HairPin 0.13
370 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
371 TestNetworkPlugins/group/bridge/NetCatPod 10.24
372 TestNetworkPlugins/group/bridge/DNS 0.17
373 TestNetworkPlugins/group/bridge/Localhost 0.14
374 TestNetworkPlugins/group/bridge/HairPin 0.15
x
+
TestDownloadOnly/v1.16.0/json-events (8.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-659842 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-659842 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.360353356s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (8.36s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-659842
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-659842: exit status 85 (83.913126ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-659842 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |          |
	|         | -p download-only-659842        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 21:00:28
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 21:00:28.709354  647730 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:00:28.709577  647730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:28.709590  647730 out.go:309] Setting ErrFile to fd 2...
	I0130 21:00:28.709594  647730 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:28.709819  647730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	W0130 21:00:28.709963  647730 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18014-640473/.minikube/config/config.json: open /home/jenkins/minikube-integration/18014-640473/.minikube/config/config.json: no such file or directory
	I0130 21:00:28.710590  647730 out.go:303] Setting JSON to true
	I0130 21:00:28.711752  647730 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6181,"bootTime":1706642248,"procs":179,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:00:28.711826  647730 start.go:138] virtualization: kvm guest
	I0130 21:00:28.714438  647730 out.go:97] [download-only-659842] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:00:28.716078  647730 out.go:169] MINIKUBE_LOCATION=18014
	W0130 21:00:28.714577  647730 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball: no such file or directory
	I0130 21:00:28.714663  647730 notify.go:220] Checking for updates...
	I0130 21:00:28.719068  647730 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:00:28.720735  647730 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:00:28.722253  647730 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:00:28.723758  647730 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 21:00:28.726576  647730 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 21:00:28.726878  647730 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:00:28.761160  647730 out.go:97] Using the kvm2 driver based on user configuration
	I0130 21:00:28.761190  647730 start.go:298] selected driver: kvm2
	I0130 21:00:28.761196  647730 start.go:902] validating driver "kvm2" against <nil>
	I0130 21:00:28.761564  647730 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:00:28.761683  647730 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 21:00:28.777125  647730 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 21:00:28.777192  647730 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 21:00:28.777737  647730 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 21:00:28.777884  647730 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 21:00:28.777967  647730 cni.go:84] Creating CNI manager for ""
	I0130 21:00:28.777981  647730 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:00:28.777993  647730 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 21:00:28.777999  647730 start_flags.go:321] config:
	{Name:download-only-659842 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-659842 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:00:28.778207  647730 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:00:28.780520  647730 out.go:97] Downloading VM boot image ...
	I0130 21:00:28.780587  647730 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18014-640473/.minikube/cache/iso/amd64/minikube-v1.32.1-1703784139-17866-amd64.iso
	I0130 21:00:31.659931  647730 out.go:97] Starting control plane node download-only-659842 in cluster download-only-659842
	I0130 21:00:31.659964  647730 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 21:00:31.698121  647730 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0130 21:00:31.698168  647730 cache.go:56] Caching tarball of preloaded images
	I0130 21:00:31.698359  647730 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0130 21:00:31.700353  647730 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0130 21:00:31.700377  647730 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:00:31.737259  647730 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-659842"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.16.0/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-659842
--- PASS: TestDownloadOnly/v1.16.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/json-events (5.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-216359 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-216359 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.330237964s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-216359
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-216359: exit status 85 (83.441046ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-659842 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | -p download-only-659842        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| delete  | -p download-only-659842        | download-only-659842 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| start   | -o=json --download-only        | download-only-216359 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | -p download-only-216359        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 21:00:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 21:00:37.461005  647884 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:00:37.461285  647884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:37.461295  647884 out.go:309] Setting ErrFile to fd 2...
	I0130 21:00:37.461300  647884 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:37.461493  647884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:00:37.462080  647884 out.go:303] Setting JSON to true
	I0130 21:00:37.463095  647884 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6190,"bootTime":1706642248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:00:37.463166  647884 start.go:138] virtualization: kvm guest
	I0130 21:00:37.467156  647884 out.go:97] [download-only-216359] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:00:37.467362  647884 notify.go:220] Checking for updates...
	I0130 21:00:37.469160  647884 out.go:169] MINIKUBE_LOCATION=18014
	I0130 21:00:37.470983  647884 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:00:37.472658  647884 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:00:37.474145  647884 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:00:37.475643  647884 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0130 21:00:37.478868  647884 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0130 21:00:37.479137  647884 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:00:37.512287  647884 out.go:97] Using the kvm2 driver based on user configuration
	I0130 21:00:37.512323  647884 start.go:298] selected driver: kvm2
	I0130 21:00:37.512331  647884 start.go:902] validating driver "kvm2" against <nil>
	I0130 21:00:37.512707  647884 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:00:37.512811  647884 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18014-640473/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0130 21:00:37.528407  647884 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0130 21:00:37.528490  647884 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0130 21:00:37.529080  647884 start_flags.go:392] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0130 21:00:37.529253  647884 start_flags.go:909] Wait components to verify : map[apiserver:true system_pods:true]
	I0130 21:00:37.529344  647884 cni.go:84] Creating CNI manager for ""
	I0130 21:00:37.529360  647884 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0130 21:00:37.529376  647884 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0130 21:00:37.529390  647884 start_flags.go:321] config:
	{Name:download-only-216359 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:download-only-216359 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:00:37.529595  647884 iso.go:125] acquiring lock: {Name:mk169769d1a88bf8f9d20fc233531f0246f9e38f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0130 21:00:37.531900  647884 out.go:97] Starting control plane node download-only-216359 in cluster download-only-216359
	I0130 21:00:37.531924  647884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:00:37.568496  647884 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 21:00:37.568539  647884 cache.go:56] Caching tarball of preloaded images
	I0130 21:00:37.568744  647884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:00:37.570698  647884 out.go:97] Downloading Kubernetes v1.28.4 preload ...
	I0130 21:00:37.570716  647884 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:00:37.623528  647884 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.4/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b0bd7b3b222c094c365d9c9e10e48fc7 -> /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0130 21:00:41.165385  647884 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:00:41.165514  647884 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/18014-640473/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 ...
	I0130 21:00:42.095432  647884 cache.go:59] Finished verifying existence of preloaded tar for  v1.28.4 on crio
	I0130 21:00:42.095866  647884 profile.go:148] Saving config to /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/download-only-216359/config.json ...
	I0130 21:00:42.095904  647884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/download-only-216359/config.json: {Name:mk036ad31dc6c6a43c29d02f97cb2d16cfc276d4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0130 21:00:42.096074  647884 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0130 21:00:42.096203  647884 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/18014-640473/.minikube/cache/linux/amd64/v1.28.4/kubectl
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-216359"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)
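For context: the non-zero exit above is the expected outcome, since a --download-only run only caches images and binaries and never creates a control plane node, so "minikube logs" has nothing to collect and the subtest still passes. A minimal sketch of the same check by hand, assuming a freshly built binary at out/minikube-linux-amd64 (profile name and flags taken from the audit table above):
	# download-only: populate the cache without creating a VM
	out/minikube-linux-amd64 start -o=json --download-only -p download-only-216359 --force --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2
	# no control plane exists, so "logs" exits non-zero (85 in this run)
	out/minikube-linux-amd64 logs -p download-only-216359; echo $?
	out/minikube-linux-amd64 delete -p download-only-216359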

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.16s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-216359
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/json-events (4.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-179689 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-179689 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.470731799s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (4.47s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-179689
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-179689: exit status 85 (84.000781ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-659842 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | -p download-only-659842           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| delete  | -p download-only-659842           | download-only-659842 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| start   | -o=json --download-only           | download-only-216359 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | -p download-only-216359           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| delete  | -p download-only-216359           | download-only-216359 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC | 30 Jan 24 21:00 UTC |
	| start   | -o=json --download-only           | download-only-179689 | jenkins | v1.32.0 | 30 Jan 24 21:00 UTC |                     |
	|         | -p download-only-179689           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/01/30 21:00:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.21.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0130 21:00:43.178914  648046 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:00:43.179126  648046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:43.179139  648046 out.go:309] Setting ErrFile to fd 2...
	I0130 21:00:43.179147  648046 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:00:43.179380  648046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:00:43.180081  648046 out.go:303] Setting JSON to true
	I0130 21:00:43.181159  648046 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6195,"bootTime":1706642248,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:00:43.181236  648046 start.go:138] virtualization: kvm guest
	I0130 21:00:43.183961  648046 out.go:97] [download-only-179689] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:00:43.185717  648046 out.go:169] MINIKUBE_LOCATION=18014
	I0130 21:00:43.184150  648046 notify.go:220] Checking for updates...
	I0130 21:00:43.188573  648046 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:00:43.190211  648046 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:00:43.191674  648046 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:00:43.193207  648046 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-179689"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-179689
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
x
+
TestBinaryMirror (0.6s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-968156 --alsologtostderr --binary-mirror http://127.0.0.1:45037 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-968156" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-968156
--- PASS: TestBinaryMirror (0.60s)

                                                
                                    
x
+
TestOffline (67.3s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-651870 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-651870 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m6.424831577s)
helpers_test.go:175: Cleaning up "offline-crio-651870" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-651870
--- PASS: TestOffline (67.30s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-444608
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-444608: exit status 85 (80.772792ms)

                                                
                                                
-- stdout --
	* Profile "addons-444608" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444608"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-444608
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-444608: exit status 85 (80.772659ms)

                                                
                                                
-- stdout --
	* Profile "addons-444608" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-444608"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
x
+
TestAddons/Setup (216.05s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-444608 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-444608 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m36.053867385s)
--- PASS: TestAddons/Setup (216.05s)

                                                
                                    
x
+
TestAddons/parallel/Registry (18.44s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 34.490184ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-kt65f" [5294b992-54aa-45df-96fb-08f9593167ee] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.030698558s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-w5n4t" [dac229c0-d6b4-4672-a1b0-fd5785554894] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.017575211s
addons_test.go:340: (dbg) Run:  kubectl --context addons-444608 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-444608 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-444608 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.149175495s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 ip
2024/01/30 21:04:42 [DEBUG] GET http://192.168.39.85:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (18.44s)
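For reference, a rough manual equivalent of the registry probe above, assuming the addons-444608 profile is still running (image, service name and node IP are taken from the log; the curl call is only an approximation of the test's DEBUG GET):
	# resolve and probe the registry service from inside the cluster
	kubectl --context addons-444608 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
	# registry-proxy publishes the same registry on the node IP, port 5000
	out/minikube-linux-amd64 -p addons-444608 ip          # 192.168.39.85 in this run
	curl -s -o /dev/null -w '%{http_code}\n' http://192.168.39.85:5000/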

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (10.97s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wx8c9" [d77b8ffd-8857-4b6b-b94d-76353eeab466] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00554123s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-444608
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-444608: (5.962368465s)
--- PASS: TestAddons/parallel/InspektorGadget (10.97s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 3.523268ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-hjdhk" [0fffac63-471a-4ee4-bb41-90cc5a14c096] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.075567269s
addons_test.go:415: (dbg) Run:  kubectl --context addons-444608 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 addons disable metrics-server --alsologtostderr -v=1: (1.345613255s)
--- PASS: TestAddons/parallel/MetricsServer (6.59s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (13.34s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.006956ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-6kl6h" [154851b3-bb81-4418-85bc-bd5eaa9f28b6] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 6.027054953s
addons_test.go:473: (dbg) Run:  kubectl --context addons-444608 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-444608 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (6.64265454s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (13.34s)

                                                
                                    
x
+
TestAddons/parallel/CSI (46.91s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 35.161895ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [4e46d7d6-a15c-48b8-aec9-c3744994bb0f] Pending
helpers_test.go:344: "task-pv-pod" [4e46d7d6-a15c-48b8-aec9-c3744994bb0f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [4e46d7d6-a15c-48b8-aec9-c3744994bb0f] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.009786439s
addons_test.go:584: (dbg) Run:  kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444608 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-444608 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-444608 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-444608 delete pod task-pv-pod: (1.496804843s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-444608 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6ada5e14-d4d7-47af-89be-1b7832936b3c] Pending
helpers_test.go:344: "task-pv-pod-restore" [6ada5e14-d4d7-47af-89be-1b7832936b3c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6ada5e14-d4d7-47af-89be-1b7832936b3c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.030474736s
addons_test.go:626: (dbg) Run:  kubectl --context addons-444608 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-444608 delete pod task-pv-pod-restore: (1.970314934s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-444608 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-444608 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.961209668s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 addons disable volumesnapshots --alsologtostderr -v=1: (1.000811378s)
--- PASS: TestAddons/parallel/CSI (46.91s)
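The sequence above reduces to: provision a PVC, mount it in a pod, snapshot it, delete the original, then restore the snapshot into a new PVC and pod. Condensed, using the same manifests named in the log (paths are relative to the test's working directory, which is an assumption here):
	kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pvc.yaml
	kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pv-pod.yaml
	kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/snapshot.yaml
	kubectl --context addons-444608 delete pod task-pv-pod && kubectl --context addons-444608 delete pvc hpvc
	kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
	kubectl --context addons-444608 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml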

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.2s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-444608 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-444608 --alsologtostderr -v=1: (3.196913969s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7ddfbb94ff-2hbhw" [f59807c3-31fe-4692-8bb3-ff395c694341] Pending
helpers_test.go:344: "headlamp-7ddfbb94ff-2hbhw" [f59807c3-31fe-4692-8bb3-ff395c694341] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7ddfbb94ff-2hbhw" [f59807c3-31fe-4692-8bb3-ff395c694341] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 13.004905427s
--- PASS: TestAddons/parallel/Headlamp (16.20s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-64c8c85f65-2jqt5" [3ac97517-90e7-47f5-b9d5-64733adf3a8a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003726574s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-444608
--- PASS: TestAddons/parallel/CloudSpanner (6.63s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (60.96s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-444608 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-444608 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [107c70f5-a375-4d06-b5f3-384f3b52942c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [107c70f5-a375-4d06-b5f3-384f3b52942c] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [107c70f5-a375-4d06-b5f3-384f3b52942c] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 10.006092393s
addons_test.go:891: (dbg) Run:  kubectl --context addons-444608 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 ssh "cat /opt/local-path-provisioner/pvc-9b1e24f6-2f64-488b-b79c-cb8ec398703e_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-444608 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-444608 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-444608 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-444608 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (44.00869649s)
--- PASS: TestAddons/parallel/LocalPath (60.96s)
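For reference, the local-path read-back above can be reproduced directly; note that the pvc-... directory name is specific to this run and would differ on a new cluster:
	kubectl --context addons-444608 apply -f testdata/storage-provisioner-rancher/pvc.yaml
	kubectl --context addons-444608 apply -f testdata/storage-provisioner-rancher/pod.yaml
	# after the test-local-path pod completes, the written file is visible on the node
	out/minikube-linux-amd64 -p addons-444608 ssh "cat /opt/local-path-provisioner/pvc-9b1e24f6-2f64-488b-b79c-cb8ec398703e_default_test-pvc/file1"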

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-z6z8l" [6f033fa5-926c-4d73-b45a-1566a992e73d] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006601905s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-444608
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.61s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (6.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-zb455" [2c78c5d5-0c99-43ad-9734-3555e64782bf] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004247847s
--- PASS: TestAddons/parallel/Yakd (6.01s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-444608 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-444608 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

                                                
                                    
x
+
TestCertOptions (83.11s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-772741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
E0130 22:01:52.587718  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-772741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m21.045065919s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-772741 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-772741 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-772741 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-772741" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-772741
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-772741: (1.532279453s)
--- PASS: TestCertOptions (83.11s)
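A sketch of the same verification by hand (flags and paths copied from the log); the expectation, inferred from the commands, is that the extra IPs/names and port 8555 show up in the generated API server certificate and kubeconfig:
	out/minikube-linux-amd64 start -p cert-options-772741 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
	# inspect the SAN list of the generated API server certificate
	out/minikube-linux-amd64 -p cert-options-772741 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# the admin kubeconfig should reference the custom API server port
	out/minikube-linux-amd64 ssh -p cert-options-772741 -- "sudo cat /etc/kubernetes/admin.conf"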

                                                
                                    
x
+
TestCertExpiration (365.23s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-822826 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-822826 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m51.586990882s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-822826 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-822826 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (1m12.014901949s)
helpers_test.go:175: Cleaning up "cert-expiration-822826" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-822826
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-822826: (1.631121966s)
--- PASS: TestCertExpiration (365.23s)
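The two start invocations above are the whole renewal path: certificates are first issued with a three-minute lifetime, then the same profile is started again with a long expiration so they are re-issued. A sketch (commands from the log; waiting for the 3m window to lapse between the two starts is an assumption based on the test's total runtime):
	out/minikube-linux-amd64 start -p cert-expiration-822826 --memory=2048 --cert-expiration=3m --driver=kvm2 --container-runtime=crio
	# ...wait for the 3m certificate lifetime to pass, then restart with a long expiration...
	out/minikube-linux-amd64 start -p cert-expiration-822826 --memory=2048 --cert-expiration=8760h --driver=kvm2 --container-runtime=crio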

                                                
                                    
x
+
TestForceSystemdFlag (55.06s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-509303 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-509303 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.827582863s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-509303 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-509303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-509303
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-509303: (1.018383305s)
--- PASS: TestForceSystemdFlag (55.06s)
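A sketch of the same check outside the harness, using the flags and path from the log (the expected cgroup_manager value is an assumption; the drop-in contents are not shown in this run):
	out/minikube-linux-amd64 start -p force-systemd-flag-509303 --memory=2048 --force-systemd --driver=kvm2 --container-runtime=crio
	# inspect the CRI-O drop-in minikube writes; with --force-systemd the cgroup manager is expected to be systemd
	out/minikube-linux-amd64 -p force-systemd-flag-509303 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"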

                                                
                                    
x
+
TestForceSystemdEnv (75.97s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-734604 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-734604 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m14.962738279s)
helpers_test.go:175: Cleaning up "force-systemd-env-734604" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-734604
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-734604: (1.01103335s)
--- PASS: TestForceSystemdEnv (75.97s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (3.36s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (3.36s)

                                                
                                    
x
+
TestErrorSpam/setup (46.84s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-582762 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-582762 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-582762 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-582762 --driver=kvm2  --container-runtime=crio: (46.839481774s)
--- PASS: TestErrorSpam/setup (46.84s)

                                                
                                    
x
+
TestErrorSpam/start (0.42s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 start --dry-run
--- PASS: TestErrorSpam/start (0.42s)

                                                
                                    
x
+
TestErrorSpam/status (0.85s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 status
--- PASS: TestErrorSpam/status (0.85s)

                                                
                                    
x
+
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.81s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 unpause
--- PASS: TestErrorSpam/unpause (1.81s)

                                                
                                    
x
+
TestErrorSpam/stop (2.3s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 stop: (2.110730967s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-582762 --log_dir /tmp/nospam-582762 stop
--- PASS: TestErrorSpam/stop (2.30s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1854: local sync path: /home/jenkins/minikube-integration/18014-640473/.minikube/files/etc/test/nested/copy/647718/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (99.89s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2233: (dbg) Run:  out/minikube-linux-amd64 start -p functional-500919 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2233: (dbg) Done: out/minikube-linux-amd64 start -p functional-500919 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m39.884772253s)
--- PASS: TestFunctional/serial/StartWithProxy (99.89s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (37.72s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-500919 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-500919 --alsologtostderr -v=8: (37.714713723s)
functional_test.go:659: soft start took 37.715549003s for "functional-500919" cluster.
--- PASS: TestFunctional/serial/SoftStart (37.72s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-500919 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-500919 cache add registry.k8s.io/pause:3.1: (1.027925996s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-500919 cache add registry.k8s.io/pause:3.3: (1.235184443s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-500919 cache add registry.k8s.io/pause:latest: (1.113993284s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.38s)
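The add_remote steps amount to caching remote images on the host and loading them into the node's CRI-O storage. A minimal sketch of the same commands, assuming the functional-500919 profile from above:

# Cache the remote pause images (each add took roughly a second in this run)
minikube -p functional-500919 cache add registry.k8s.io/pause:3.1
minikube -p functional-500919 cache add registry.k8s.io/pause:3.3
minikube -p functional-500919 cache add registry.k8s.io/pause:latest

# List the host-side cache, then confirm the images are visible to CRI-O inside the node
minikube cache list
minikube -p functional-500919 ssh sudo crictl images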

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (243.970794ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.74s)
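cache_reload exercises recovery after an image disappears from the node: the image is removed with crictl, the inspect fails as shown above, and cache reload pushes the cached copy back in. A sketch of the same flow using the commands from this run:

# Remove the image from the node's container storage
minikube -p functional-500919 ssh sudo crictl rmi registry.k8s.io/pause:latest

# Expected to fail now ("no such image ... present"), hence the || true
minikube -p functional-500919 ssh sudo crictl inspecti registry.k8s.io/pause:latest || true

# Re-push everything in the host-side cache into the node, then re-check
minikube -p functional-500919 cache reload
minikube -p functional-500919 ssh sudo crictl inspecti registry.k8s.io/pause:latest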

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 kubectl -- --context functional-500919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.13s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-500919 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (38.47s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-500919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-500919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (38.469580997s)
functional_test.go:757: restart took 38.469748292s for "functional-500919" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (38.47s)
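ExtraConfig restarts the existing profile with an extra API-server flag and again waits for all components. A minimal sketch of the flag as used above; the restart took about 38s in this run:

# Restart the profile with the NamespaceAutoProvision admission plugin enabled
minikube start -p functional-500919 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all

# Optionally confirm the control-plane pods are healthy afterwards, as ComponentHealth does next
kubectl --context functional-500919 get po -l tier=control-plane -n kube-system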

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-500919 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 logs
E0130 21:14:25.157796  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.163800  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.174141  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.195181  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.235546  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.316544  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.477263  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:25.798125  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-500919 logs: (1.571666635s)
--- PASS: TestFunctional/serial/LogsCmd (1.57s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 logs --file /tmp/TestFunctionalserialLogsFileCmd682828269/001/logs.txt
E0130 21:14:26.439262  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-500919 logs --file /tmp/TestFunctionalserialLogsFileCmd682828269/001/logs.txt: (1.566512502s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.57s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.38s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2320: (dbg) Run:  kubectl --context functional-500919 apply -f testdata/invalidsvc.yaml
E0130 21:14:27.719752  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:14:30.280907  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
functional_test.go:2334: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-500919
functional_test.go:2334: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-500919: exit status 115 (308.237823ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.114:30919 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2326: (dbg) Run:  kubectl --context functional-500919 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.38s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 config get cpus: exit status 14 (74.994868ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 config get cpus: exit status 14 (61.462147ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.48s)
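ConfigCmd checks that config get on an unset key exits with status 14 and that a value round-trips through set/get/unset. A sketch of the same sequence:

# "get" on an unset key fails with exit status 14
minikube -p functional-500919 config get cpus; echo "exit: $?"

# Set, read back, then clear the key again
minikube -p functional-500919 config set cpus 2
minikube -p functional-500919 config get cpus
minikube -p functional-500919 config unset cpus
minikube -p functional-500919 config get cpus; echo "exit: $?"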

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (17.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-500919 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-500919 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 655696: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (17.16s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-500919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-500919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (169.658633ms)

                                                
                                                
-- stdout --
	* [functional-500919] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:15:01.077456  655355 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:15:01.077605  655355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:15:01.077617  655355 out.go:309] Setting ErrFile to fd 2...
	I0130 21:15:01.077622  655355 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:15:01.077822  655355 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:15:01.078483  655355 out.go:303] Setting JSON to false
	I0130 21:15:01.079623  655355 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7053,"bootTime":1706642248,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:15:01.079713  655355 start.go:138] virtualization: kvm guest
	I0130 21:15:01.082190  655355 out.go:177] * [functional-500919] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 21:15:01.083826  655355 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 21:15:01.083820  655355 notify.go:220] Checking for updates...
	I0130 21:15:01.085238  655355 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:15:01.086697  655355 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:15:01.088050  655355 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:15:01.089408  655355 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 21:15:01.091010  655355 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 21:15:01.092997  655355 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:15:01.093650  655355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:15:01.093715  655355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:15:01.110211  655355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46317
	I0130 21:15:01.110687  655355 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:15:01.111265  655355 main.go:141] libmachine: Using API Version  1
	I0130 21:15:01.111291  655355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:15:01.111640  655355 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:15:01.111839  655355 main.go:141] libmachine: (functional-500919) Calling .DriverName
	I0130 21:15:01.112131  655355 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:15:01.112483  655355 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:15:01.112529  655355 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:15:01.128319  655355 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40477
	I0130 21:15:01.128767  655355 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:15:01.129272  655355 main.go:141] libmachine: Using API Version  1
	I0130 21:15:01.129301  655355 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:15:01.129778  655355 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:15:01.130001  655355 main.go:141] libmachine: (functional-500919) Calling .DriverName
	I0130 21:15:01.170830  655355 out.go:177] * Using the kvm2 driver based on existing profile
	I0130 21:15:01.172437  655355 start.go:298] selected driver: kvm2
	I0130 21:15:01.172455  655355 start.go:902] validating driver "kvm2" against &{Name:functional-500919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-500919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.114 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:15:01.172673  655355 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 21:15:01.175230  655355 out.go:177] 
	W0130 21:15:01.176668  655355 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0130 21:15:01.178051  655355 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-500919 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)
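DryRun validates the requested settings without touching the running VM; the 250MB request is rejected (exit status 23) because it is below the 1800MB usable minimum reported in the stderr above. A sketch against the same existing profile:

# Rejected: the memory request is below the minimum, so this exits non-zero without changing the VM
minikube start -p functional-500919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=crio || echo "dry run rejected as expected"

# Accepted: same dry run without the memory override
minikube start -p functional-500919 --dry-run --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio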

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-500919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-500919 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (157.575456ms)

                                                
                                                
-- stdout --
	* [functional-500919] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:14:59.290654  655024 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:14:59.290948  655024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:59.290957  655024 out.go:309] Setting ErrFile to fd 2...
	I0130 21:14:59.290962  655024 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:14:59.291274  655024 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:14:59.291845  655024 out.go:303] Setting JSON to false
	I0130 21:14:59.292845  655024 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7051,"bootTime":1706642248,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 21:14:59.292914  655024 start.go:138] virtualization: kvm guest
	I0130 21:14:59.294953  655024 out.go:177] * [functional-500919] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0130 21:14:59.296645  655024 notify.go:220] Checking for updates...
	I0130 21:14:59.296651  655024 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 21:14:59.298145  655024 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 21:14:59.299621  655024 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 21:14:59.301023  655024 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 21:14:59.302445  655024 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 21:14:59.303883  655024 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 21:14:59.305751  655024 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:14:59.306187  655024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:14:59.306274  655024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:14:59.321810  655024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41675
	I0130 21:14:59.322205  655024 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:14:59.322738  655024 main.go:141] libmachine: Using API Version  1
	I0130 21:14:59.322763  655024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:14:59.323485  655024 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:14:59.323786  655024 main.go:141] libmachine: (functional-500919) Calling .DriverName
	I0130 21:14:59.324806  655024 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 21:14:59.325348  655024 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:14:59.325418  655024 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:14:59.340263  655024 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34585
	I0130 21:14:59.340730  655024 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:14:59.341278  655024 main.go:141] libmachine: Using API Version  1
	I0130 21:14:59.341303  655024 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:14:59.341731  655024 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:14:59.341964  655024 main.go:141] libmachine: (functional-500919) Calling .DriverName
	I0130 21:14:59.380185  655024 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0130 21:14:59.381148  655024 start.go:298] selected driver: kvm2
	I0130 21:14:59.381166  655024 start.go:902] validating driver "kvm2" against &{Name:functional-500919 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17866/minikube-v1.32.1-1703784139-17866-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1704759386-17866@sha256:8c3c33047f9bc285e1f5f2a5aa14744a2fe04c58478f02f77b06169dea8dd3f0 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-500919 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.50.114 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:}
	I0130 21:14:59.381285  655024 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 21:14:59.383577  655024 out.go:177] 
	W0130 21:14:59.384748  655024 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0130 21:14:59.386028  655024 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.03s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (23.9s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-500919 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
E0130 21:14:35.401348  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
functional_test.go:1634: (dbg) Run:  kubectl --context functional-500919 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-74hgj" [87073e8d-671f-4934-803c-072dd2c235fb] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-74hgj" [87073e8d-671f-4934-803c-072dd2c235fb] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.243688196s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.50.114:30696
functional_test.go:1674: http://192.168.50.114:30696: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-74hgj

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.114:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.50.114:30696
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.90s)
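ServiceCmdConnect deploys an echo server, exposes it as a NodePort service, resolves the URL via minikube service --url, and fetches it; the test performs the HTTP GET in Go, so the curl at the end is an assumption used here for illustration:

# Deploy and expose the echo server used by the test
kubectl --context functional-500919 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-500919 expose deployment hello-node-connect --type=NodePort --port=8080

# Wait for the pod, resolve the NodePort URL, and hit it
kubectl --context functional-500919 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
URL=$(minikube -p functional-500919 service hello-node-connect --url)
curl -s "$URL"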

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (47.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8084e916-4898-4e77-ae14-d549d6be8d71] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005093392s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-500919 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-500919 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-500919 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-500919 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-500919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [cfc48f10-c41f-4120-8bef-9e96bf2a50ac] Pending
helpers_test.go:344: "sp-pod" [cfc48f10-c41f-4120-8bef-9e96bf2a50ac] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0130 21:14:45.642424  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [cfc48f10-c41f-4120-8bef-9e96bf2a50ac] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 25.007381896s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-500919 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-500919 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-500919 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [1c4007a6-09c9-4096-aedb-5d0dac83d4f3] Pending
helpers_test.go:344: "sp-pod" [1c4007a6-09c9-4096-aedb-5d0dac83d4f3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [1c4007a6-09c9-4096-aedb-5d0dac83d4f3] Running
2024/01/30 21:15:18 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.00504572s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-500919 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (47.68s)
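The PVC test claims storage from the default storage class, mounts it in sp-pod, writes /tmp/mount/foo, recreates the pod, and checks the file survived. The repository's testdata/storage-provisioner manifests are not reproduced here; a hypothetical claim of the same shape (size assumed, name matching the "myclaim" used in this run) would look like:

# Create a small claim against the default storage class
kubectl --context functional-500919 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 500Mi
EOF

# Verify the claim binds before pointing a pod at it
kubectl --context functional-500919 get pvc myclaim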

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.51s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh -n functional-500919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cp functional-500919:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd927991897/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh -n functional-500919 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh -n functional-500919 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (28.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: (dbg) Run:  kubectl --context functional-500919 replace --force -f testdata/mysql.yaml
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-bx54q" [a459f3ce-6002-4035-bbbb-5a7c1a3d360f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-bx54q" [a459f3ce-6002-4035-bbbb-5a7c1a3d360f] Running
functional_test.go:1798: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.009349286s
functional_test.go:1806: (dbg) Run:  kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;": exit status 1 (178.267286ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;": exit status 1 (598.763586ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;"
functional_test.go:1806: (dbg) Non-zero exit: kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;": exit status 1 (321.736939ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1806: (dbg) Run:  kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (28.34s)
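The MySQL check tolerates the server still starting up inside the pod, which is why the access-denied and socket errors above are retried rather than treated as failures. A sketch of the same idea as a retry loop; the pod name is specific to this run:

# Retry "show databases" until mysqld accepts the connection (up to ~30s)
for i in $(seq 1 10); do
  if kubectl --context functional-500919 exec mysql-859648c796-bx54q -- mysql -ppassword -e "show databases;"; then
    break
  fi
  sleep 3
done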

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1928: Checking for existence of /etc/test/nested/copy/647718/hosts within VM
functional_test.go:1930: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /etc/test/nested/copy/647718/hosts"
functional_test.go:1935: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.78s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1971: Checking for existence of /etc/ssl/certs/647718.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /etc/ssl/certs/647718.pem"
functional_test.go:1971: Checking for existence of /usr/share/ca-certificates/647718.pem within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /usr/share/ca-certificates/647718.pem"
functional_test.go:1971: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1972: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/6477182.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /etc/ssl/certs/6477182.pem"
functional_test.go:1998: Checking for existence of /usr/share/ca-certificates/6477182.pem within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /usr/share/ca-certificates/6477182.pem"
functional_test.go:1998: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1999: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.78s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-500919 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo systemctl is-active docker"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh "sudo systemctl is-active docker": exit status 1 (256.357323ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2026: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo systemctl is-active containerd"
functional_test.go:2026: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh "sudo systemctl is-active containerd": exit status 1 (259.180302ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.52s)
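With CRI-O as the container runtime, the docker and containerd units on the node are expected to be inactive, so systemctl is-active exits non-zero for both; that non-zero status is what the test asserts. A sketch:

# Both should print "inactive" and exit with a non-zero status on a crio-based node
minikube -p functional-500919 ssh "sudo systemctl is-active docker" || true
minikube -p functional-500919 ssh "sudo systemctl is-active containerd" || true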

                                                
                                    
x
+
TestFunctional/parallel/License (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2287: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2255: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2269: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-500919 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-500919 image ls --format short --alsologtostderr:
I0130 21:15:03.211377  655631 out.go:296] Setting OutFile to fd 1 ...
I0130 21:15:03.211521  655631 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:03.211530  655631 out.go:309] Setting ErrFile to fd 2...
I0130 21:15:03.211535  655631 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:03.211751  655631 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
I0130 21:15:03.212395  655631 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:03.212516  655631 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:03.212896  655631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:03.212955  655631 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:03.229291  655631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33425
I0130 21:15:03.229777  655631 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:03.230421  655631 main.go:141] libmachine: Using API Version  1
I0130 21:15:03.230447  655631 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:03.230806  655631 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:03.231065  655631 main.go:141] libmachine: (functional-500919) Calling .GetState
I0130 21:15:03.232796  655631 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:03.232842  655631 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:03.248645  655631 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40117
I0130 21:15:03.249131  655631 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:03.249682  655631 main.go:141] libmachine: Using API Version  1
I0130 21:15:03.249713  655631 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:03.250121  655631 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:03.250347  655631 main.go:141] libmachine: (functional-500919) Calling .DriverName
I0130 21:15:03.250592  655631 ssh_runner.go:195] Run: systemctl --version
I0130 21:15:03.250630  655631 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
I0130 21:15:03.253523  655631 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:03.253961  655631 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
I0130 21:15:03.254019  655631 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:03.254153  655631 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
I0130 21:15:03.254355  655631 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
I0130 21:15:03.254514  655631 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
I0130 21:15:03.254649  655631 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
I0130 21:15:03.401098  655631 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 21:15:03.479219  655631 main.go:141] libmachine: Making call to close driver server
I0130 21:15:03.479236  655631 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:03.479607  655631 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
I0130 21:15:03.479594  655631 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:03.479660  655631 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 21:15:03.479678  655631 main.go:141] libmachine: Making call to close driver server
I0130 21:15:03.479691  655631 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:03.479952  655631 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:03.479965  655631 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
I0130 21:15:03.479986  655631 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-500919 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| localhost/my-image                      | functional-500919  | 4f80e04f91c1e | 1.47MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | latest             | a8758716bb6aa | 191MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-500919 image ls --format table --alsologtostderr:
I0130 21:15:07.697744  655918 out.go:296] Setting OutFile to fd 1 ...
I0130 21:15:07.697938  655918 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:07.697952  655918 out.go:309] Setting ErrFile to fd 2...
I0130 21:15:07.697958  655918 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:07.698233  655918 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
I0130 21:15:07.699095  655918 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:07.699343  655918 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:07.699995  655918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:07.700070  655918 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:07.715484  655918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37353
I0130 21:15:07.716092  655918 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:07.716825  655918 main.go:141] libmachine: Using API Version  1
I0130 21:15:07.716858  655918 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:07.717256  655918 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:07.717511  655918 main.go:141] libmachine: (functional-500919) Calling .GetState
I0130 21:15:07.719758  655918 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:07.719814  655918 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:07.740446  655918 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45251
I0130 21:15:07.740851  655918 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:07.741398  655918 main.go:141] libmachine: Using API Version  1
I0130 21:15:07.741416  655918 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:07.741800  655918 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:07.742011  655918 main.go:141] libmachine: (functional-500919) Calling .DriverName
I0130 21:15:07.742226  655918 ssh_runner.go:195] Run: systemctl --version
I0130 21:15:07.742254  655918 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
I0130 21:15:07.745651  655918 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:07.746111  655918 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
I0130 21:15:07.746169  655918 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:07.746350  655918 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
I0130 21:15:07.746547  655918 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
I0130 21:15:07.746713  655918 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
I0130 21:15:07.746844  655918 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
I0130 21:15:07.856684  655918 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 21:15:07.944991  655918 main.go:141] libmachine: Making call to close driver server
I0130 21:15:07.945038  655918 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:07.945377  655918 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:07.945399  655918 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 21:15:07.945419  655918 main.go:141] libmachine: Making call to close driver server
I0130 21:15:07.945429  655918 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:07.945903  655918 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
I0130 21:15:07.946026  655918 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:07.946058  655918 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-500919 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7
e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.4"],"size":"127226832"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6","repoDigests":["docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c","docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac"],"repoTags":["docker.io/library/nginx:latest"],"size":"190867606"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/
k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"4f80e04f91c1e0d11517cc9d35e7417e961cd1ed15f4ad2ababf72047c22a99c","repoDigests":["localhost/my-image@sha256:b090fa0e7fd48d93724078ee0027e2c2c31969fd8c728c4f51e25bdd57fa6798"],"repoTags":["localhost/my-image:functional-500919"],"size":"1468600"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"83f6
cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28
ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"9323772b26588600be962b51a09e485c9327963ba56970b6ebc730197ff464c2","repoDigests":["docker.io/library/a9a39c04ae259582ac3938548bab926170c6f71555a811d1847fed700f727c57-tmp@sha256:0f3cda5b6e897b3c3b279955459a9b4f0c60d423b
63814e17e3441f8564d23ec"],"repoTags":[],"size":"1466018"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.i
o/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-500919 image ls --format json --alsologtostderr:
I0130 21:15:07.419144  655895 out.go:296] Setting OutFile to fd 1 ...
I0130 21:15:07.419436  655895 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:07.419448  655895 out.go:309] Setting ErrFile to fd 2...
I0130 21:15:07.419456  655895 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:07.419651  655895 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
I0130 21:15:07.420308  655895 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:07.420454  655895 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:07.420910  655895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:07.420986  655895 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:07.435873  655895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37815
I0130 21:15:07.436385  655895 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:07.437038  655895 main.go:141] libmachine: Using API Version  1
I0130 21:15:07.437065  655895 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:07.437405  655895 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:07.437652  655895 main.go:141] libmachine: (functional-500919) Calling .GetState
I0130 21:15:07.439806  655895 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:07.439864  655895 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:07.454974  655895 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43221
I0130 21:15:07.455521  655895 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:07.456068  655895 main.go:141] libmachine: Using API Version  1
I0130 21:15:07.456096  655895 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:07.456436  655895 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:07.456624  655895 main.go:141] libmachine: (functional-500919) Calling .DriverName
I0130 21:15:07.456813  655895 ssh_runner.go:195] Run: systemctl --version
I0130 21:15:07.456855  655895 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
I0130 21:15:07.459649  655895 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:07.460088  655895 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
I0130 21:15:07.460124  655895 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:07.460232  655895 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
I0130 21:15:07.460445  655895 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
I0130 21:15:07.460584  655895 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
I0130 21:15:07.460722  655895 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
I0130 21:15:07.561488  655895 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 21:15:07.606995  655895 main.go:141] libmachine: Making call to close driver server
I0130 21:15:07.607018  655895 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:07.607320  655895 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:07.607382  655895 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 21:15:07.607394  655895 main.go:141] libmachine: Making call to close driver server
I0130 21:15:07.607390  655895 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
I0130 21:15:07.607403  655895 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:07.607687  655895 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
I0130 21:15:07.607727  655895 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:07.607741  655895 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-500919 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: a8758716bb6aa4d90071160d27028fe4eaee7ce8166221a97d30440c8eac2be6
repoDigests:
- docker.io/library/nginx@sha256:161ef4b1bf7effb350a2a9625cb2b59f69d54ec6059a8a155a1438d0439c593c
- docker.io/library/nginx@sha256:4c0fdaa8b6341bfdeca5f18f7837462c80cff90527ee35ef185571e1c327beac
repoTags:
- docker.io/library/nginx:latest
size: "190867606"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-500919 image ls --format yaml --alsologtostderr:
I0130 21:15:03.550853  655654 out.go:296] Setting OutFile to fd 1 ...
I0130 21:15:03.551046  655654 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:03.551062  655654 out.go:309] Setting ErrFile to fd 2...
I0130 21:15:03.551071  655654 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:03.551333  655654 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
I0130 21:15:03.552038  655654 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:03.552169  655654 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:03.552583  655654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:03.552661  655654 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:03.568025  655654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42513
I0130 21:15:03.568677  655654 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:03.569289  655654 main.go:141] libmachine: Using API Version  1
I0130 21:15:03.569315  655654 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:03.569693  655654 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:03.569928  655654 main.go:141] libmachine: (functional-500919) Calling .GetState
I0130 21:15:03.571834  655654 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:03.571874  655654 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:03.586841  655654 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35531
I0130 21:15:03.587382  655654 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:03.587929  655654 main.go:141] libmachine: Using API Version  1
I0130 21:15:03.587955  655654 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:03.588334  655654 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:03.588537  655654 main.go:141] libmachine: (functional-500919) Calling .DriverName
I0130 21:15:03.588731  655654 ssh_runner.go:195] Run: systemctl --version
I0130 21:15:03.588760  655654 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
I0130 21:15:03.592157  655654 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:03.592696  655654 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
I0130 21:15:03.592725  655654 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:03.592931  655654 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
I0130 21:15:03.593189  655654 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
I0130 21:15:03.593345  655654 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
I0130 21:15:03.593494  655654 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
I0130 21:15:03.759492  655654 ssh_runner.go:195] Run: sudo crictl images --output json
I0130 21:15:03.877164  655654 main.go:141] libmachine: Making call to close driver server
I0130 21:15:03.877178  655654 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:03.877545  655654 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:03.877572  655654 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 21:15:03.877599  655654 main.go:141] libmachine: Making call to close driver server
I0130 21:15:03.877618  655654 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:03.877888  655654 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:03.877906  655654 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 21:15:03.877937  655654 main.go:141] libmachine: (functional-500919) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.40s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh pgrep buildkitd: exit status 1 (251.087366ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image build -t localhost/my-image:functional-500919 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-500919 image build -t localhost/my-image:functional-500919 testdata/build --alsologtostderr: (2.856093475s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-500919 image build -t localhost/my-image:functional-500919 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 9323772b265
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-500919
--> 4f80e04f91c
Successfully tagged localhost/my-image:functional-500919
4f80e04f91c1e0d11517cc9d35e7417e961cd1ed15f4ad2ababf72047c22a99c
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-500919 image build -t localhost/my-image:functional-500919 testdata/build --alsologtostderr:
I0130 21:15:04.194472  655728 out.go:296] Setting OutFile to fd 1 ...
I0130 21:15:04.194689  655728 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:04.194702  655728 out.go:309] Setting ErrFile to fd 2...
I0130 21:15:04.194706  655728 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0130 21:15:04.194920  655728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
I0130 21:15:04.195604  655728 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:04.196336  655728 config.go:182] Loaded profile config "functional-500919": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0130 21:15:04.196769  655728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:04.196815  655728 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:04.212087  655728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34785
I0130 21:15:04.212578  655728 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:04.213283  655728 main.go:141] libmachine: Using API Version  1
I0130 21:15:04.213322  655728 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:04.213665  655728 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:04.213854  655728 main.go:141] libmachine: (functional-500919) Calling .GetState
I0130 21:15:04.215722  655728 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0130 21:15:04.215777  655728 main.go:141] libmachine: Launching plugin server for driver kvm2
I0130 21:15:04.231285  655728 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36139
I0130 21:15:04.231750  655728 main.go:141] libmachine: () Calling .GetVersion
I0130 21:15:04.232261  655728 main.go:141] libmachine: Using API Version  1
I0130 21:15:04.232286  655728 main.go:141] libmachine: () Calling .SetConfigRaw
I0130 21:15:04.232617  655728 main.go:141] libmachine: () Calling .GetMachineName
I0130 21:15:04.232810  655728 main.go:141] libmachine: (functional-500919) Calling .DriverName
I0130 21:15:04.233080  655728 ssh_runner.go:195] Run: systemctl --version
I0130 21:15:04.233120  655728 main.go:141] libmachine: (functional-500919) Calling .GetSSHHostname
I0130 21:15:04.235830  655728 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:04.236260  655728 main.go:141] libmachine: (functional-500919) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:db:80:b1", ip: ""} in network mk-functional-500919: {Iface:virbr1 ExpiryTime:2024-01-30 22:11:35 +0000 UTC Type:0 Mac:52:54:00:db:80:b1 Iaid: IPaddr:192.168.50.114 Prefix:24 Hostname:functional-500919 Clientid:01:52:54:00:db:80:b1}
I0130 21:15:04.236292  655728 main.go:141] libmachine: (functional-500919) DBG | domain functional-500919 has defined IP address 192.168.50.114 and MAC address 52:54:00:db:80:b1 in network mk-functional-500919
I0130 21:15:04.236449  655728 main.go:141] libmachine: (functional-500919) Calling .GetSSHPort
I0130 21:15:04.236624  655728 main.go:141] libmachine: (functional-500919) Calling .GetSSHKeyPath
I0130 21:15:04.236787  655728 main.go:141] libmachine: (functional-500919) Calling .GetSSHUsername
I0130 21:15:04.236935  655728 sshutil.go:53] new ssh client: &{IP:192.168.50.114 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/functional-500919/id_rsa Username:docker}
I0130 21:15:04.320435  655728 build_images.go:151] Building image from path: /tmp/build.2447982979.tar
I0130 21:15:04.320530  655728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0130 21:15:04.335725  655728 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2447982979.tar
I0130 21:15:04.340555  655728 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2447982979.tar: stat -c "%s %y" /var/lib/minikube/build/build.2447982979.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2447982979.tar': No such file or directory
I0130 21:15:04.340593  655728 ssh_runner.go:362] scp /tmp/build.2447982979.tar --> /var/lib/minikube/build/build.2447982979.tar (3072 bytes)
I0130 21:15:04.369826  655728 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2447982979
I0130 21:15:04.379385  655728 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2447982979 -xf /var/lib/minikube/build/build.2447982979.tar
I0130 21:15:04.388680  655728 crio.go:297] Building image: /var/lib/minikube/build/build.2447982979
I0130 21:15:04.388762  655728 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-500919 /var/lib/minikube/build/build.2447982979 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0130 21:15:06.952013  655728 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-500919 /var/lib/minikube/build/build.2447982979 --cgroup-manager=cgroupfs: (2.563224229s)
I0130 21:15:06.952111  655728 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2447982979
I0130 21:15:06.963360  655728 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2447982979.tar
I0130 21:15:06.981246  655728 build_images.go:207] Built localhost/my-image:functional-500919 from /tmp/build.2447982979.tar
I0130 21:15:06.981279  655728 build_images.go:123] succeeded building to: functional-500919
I0130 21:15:06.981283  655728 build_images.go:124] failed building to: 
I0130 21:15:06.981340  655728 main.go:141] libmachine: Making call to close driver server
I0130 21:15:06.981361  655728 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:06.981674  655728 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:06.981684  655728 main.go:141] libmachine: Making call to close connection to plugin binary
I0130 21:15:06.981702  655728 main.go:141] libmachine: Making call to close driver server
I0130 21:15:06.981714  655728 main.go:141] libmachine: (functional-500919) Calling .Close
I0130 21:15:06.981957  655728 main.go:141] libmachine: Successfully made call to close driver server
I0130 21:15:06.981975  655728 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-500919
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.93s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2118: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "256.207399ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "65.110981ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.32s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "236.69479ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "73.148898ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image rm gcr.io/google-containers/addon-resizer:functional-500919 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (20.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-500919 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-500919 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-zzksx" [feacb670-dba9-447a-a0bb-fa2ae7665971] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-zzksx" [feacb670-dba9-447a-a0bb-fa2ae7665971] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.358935855s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.58s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 service list -o json
functional_test.go:1493: Took "551.771524ms" to run "out/minikube-linux-amd64 -p functional-500919 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdany-port605265841/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1706649299392970486" to /tmp/TestFunctionalparallelMountCmdany-port605265841/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1706649299392970486" to /tmp/TestFunctionalparallelMountCmdany-port605265841/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1706649299392970486" to /tmp/TestFunctionalparallelMountCmdany-port605265841/001/test-1706649299392970486
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (295.72955ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jan 30 21:14 created-by-test
-rw-r--r-- 1 docker docker 24 Jan 30 21:14 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jan 30 21:14 test-1706649299392970486
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh cat /mount-9p/test-1706649299392970486
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-500919 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7e47f431-583d-4877-909e-f0cc64b5769f] Pending
helpers_test.go:344: "busybox-mount" [7e47f431-583d-4877-909e-f0cc64b5769f] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7e47f431-583d-4877-909e-f0cc64b5769f] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
E0130 21:15:06.122804  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [7e47f431-583d-4877-909e-f0cc64b5769f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005628851s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-500919 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdany-port605265841/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.23s)
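
The sequence above (host-side mount, findmnt retry, file listing, busybox-mount pod) can be reproduced by hand with the same commands the test invokes; the host directory below is an arbitrary stand-in for the per-test temp directory:

    # terminal 1: expose a host directory inside the guest over 9p
    mkdir -p /tmp/mount-demo && echo hello > /tmp/mount-demo/created-by-hand
    out/minikube-linux-amd64 mount -p functional-500919 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1
    # terminal 2: the first findmnt can race the mount daemon, which is why the test retries once
    out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-500919 ssh -- ls -la /mount-9p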

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.50.114:32413
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.50.114:32413
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
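
The HTTPS, Format, and URL subtests all resolve the same NodePort endpoint in different output formats. A short sketch, assuming the hello-node service is still deployed; the curl probe is an illustration, not part of the test:

    URL=$(out/minikube-linux-amd64 -p functional-500919 service hello-node --url)
    echo "$URL"                                        # e.g. http://192.168.50.114:32413 in this run
    curl -s -o /dev/null -w '%{http_code}\n' "$URL"    # print the HTTP status returned by the service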

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdspecific-port3817873563/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (277.557867ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdspecific-port3817873563/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh "sudo umount -f /mount-9p": exit status 1 (238.291221ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-500919 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdspecific-port3817873563/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.09s)
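
The specific-port variant pins the 9p server to --port 46464, and once the mount daemon is stopped the forced umount inside the guest fails with "not mounted" (status 32), which the test tolerates. A rough reproduction sketch using the flags from this log; the host path is arbitrary:

    out/minikube-linux-amd64 mount -p functional-500919 /tmp/mount-demo:/mount-9p --alsologtostderr -v=1 --port 46464 &
    MOUNT_PID=$!
    out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T /mount-9p | grep 9p"
    kill "$MOUNT_PID"    # afterwards 'umount -f /mount-9p' in the guest reports "not mounted"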

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T" /mount1: exit status 1 (332.695075ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-500919 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-500919 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-500919 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3739618278/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.81s)
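
VerifyCleanup leans on "minikube mount --kill=true", which terminates every lingering mount helper for the profile; that is why the per-mount stop steps afterwards only find dead processes. A sketch of the cleanup call taken from this log (host path arbitrary):

    out/minikube-linux-amd64 mount -p functional-500919 /tmp/mount-demo:/mount1 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-500919 /tmp/mount-demo:/mount2 --alsologtostderr -v=1 &
    out/minikube-linux-amd64 mount -p functional-500919 --kill=true    # kills all mount helpers for this profile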

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-500919
--- PASS: TestFunctional/delete_addon-resizer_images (0.06s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-500919
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-500919
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestIngressAddonLegacy/StartLegacyK8sCluster (75.67s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-298651 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0130 21:15:47.083211  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-298651 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m15.669272183s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (75.67s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.97s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons enable ingress --alsologtostderr -v=5: (12.969635029s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (12.97s)

                                                
                                    
TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.6s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.60s)
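
Taken together, the three blocks above pin the cluster to Kubernetes v1.18.20 and then switch on the ingress and ingress-dns addons that the later ValidateIngressAddons test exercises. As a plain shell sketch, using the exact commands from this log:

    out/minikube-linux-amd64 start -p ingress-addon-legacy-298651 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons enable ingress --alsologtostderr -v=5
    out/minikube-linux-amd64 -p ingress-addon-legacy-298651 addons enable ingress-dns --alsologtostderr -v=5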

                                                
                                    
TestJSONOutput/start/Command (67.64s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-192798 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0130 21:20:13.679183  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:20:54.641059  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-192798 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m7.640575383s)
--- PASS: TestJSONOutput/start/Command (67.64s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-192798 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.68s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.63s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-192798 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.11s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-192798 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-192798 --output=json --user=testUser: (7.108624614s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-073576 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-073576 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (79.956716ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"52adfe4c-6747-4e6c-9fee-121fcb86b215","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-073576] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7dbf675-b966-46af-b501-7905a4489f2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18014"}}
	{"specversion":"1.0","id":"cebc74d0-c0c4-406c-9f50-eb21edf7064c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"559e90db-5075-472a-87f8-aab80c5c8446","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig"}}
	{"specversion":"1.0","id":"3759f9f7-cbe4-4905-a555-2e52a3f28c1f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube"}}
	{"specversion":"1.0","id":"cf65383b-b6b9-44ce-a3e1-7c66620f2add","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"790fe105-ce9d-413e-884d-752d393f57b8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5ce25bff-ba7f-4ca2-bae6-56fede2d283a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-073576" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-073576
--- PASS: TestErrorJSONOutput (0.22s)
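
The stdout above is a stream of CloudEvents-style JSON lines; the unsupported "fail" driver surfaces as the event whose type is io.k8s.sigs.minikube.error with exitcode 56. A filtering sketch, assuming jq is available (not part of the test tooling):

    out/minikube-linux-amd64 start -p json-output-error-073576 --memory=2200 --output=json --wait=true --driver=fail \
      | jq -c 'select(.type == "io.k8s.sigs.minikube.error") | .data'
    out/minikube-linux-amd64 delete -p json-output-error-073576    # clean up the stub profile, as the test helper does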

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.67s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-797369 --driver=kvm2  --container-runtime=crio
E0130 21:21:52.587389  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:52.592713  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:52.603034  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:52.623376  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:52.663763  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:52.744238  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:52.904712  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:53.225301  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:53.865986  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:21:55.146684  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-797369 --driver=kvm2  --container-runtime=crio: (44.829219869s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-799568 --driver=kvm2  --container-runtime=crio
E0130 21:21:57.707129  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:22:02.827848  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:22:13.068564  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:22:16.561617  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:22:33.548798  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-799568 --driver=kvm2  --container-runtime=crio: (50.100898236s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-797369
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-799568
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-799568" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-799568
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-799568: (1.023280666s)
helpers_test.go:175: Cleaning up "first-797369" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-797369
--- PASS: TestMinikubeProfile (97.67s)
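
The profile subcommands above first switch the active profile and then list every profile as JSON. A minimal sketch with the same commands; the jq pretty-print is optional, and since the JSON field layout is not shown in this log, none is assumed:

    out/minikube-linux-amd64 profile first-797369          # make first-797369 the active profile
    out/minikube-linux-amd64 profile list -ojson | jq .    # raw JSON for all profiles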

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.12s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-314377 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0130 21:23:14.509601  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-314377 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.11818508s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.12s)
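
Unlike the MountCmd tests above, mount-start bakes the 9p mount into the VM at start time and skips Kubernetes entirely via --no-kubernetes; the later Verify* subtests then only need to look for the mount inside the guest. A sketch with the flags from this run:

    out/minikube-linux-amd64 start -p mount-start-1-314377 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p mount-start-1-314377 ssh -- mount | grep 9p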

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-314377 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-314377 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.41s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (26.28s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-336948 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-336948 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.278518722s)
--- PASS: TestMountStart/serial/StartWithMountSecond (26.28s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-336948 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-336948 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.41s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-314377 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-336948 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-336948 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.23s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-336948
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-336948: (1.233137929s)
--- PASS: TestMountStart/serial/Stop (1.23s)

                                                
                                    
TestMountStart/serial/RestartStopped (25.1s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-336948
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-336948: (24.101530741s)
--- PASS: TestMountStart/serial/RestartStopped (25.10s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-336948 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-336948 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (124.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-721181 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0130 21:24:25.157656  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:24:32.716446  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:24:36.430296  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:25:00.402815  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
multinode_test.go:86: (dbg) Done: out/minikube-linux-amd64 start -p multinode-721181 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m3.773695201s)
multinode_test.go:92: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (124.21s)
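
The multi-node suite starts a two-node cluster and then drives it with the minikube node and status subcommands. Bring-up and health check as a shell sketch, using the flags from this log:

    out/minikube-linux-amd64 start -p multinode-721181 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr    # expect one Control Plane and one Worker entry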

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (5.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:509: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:514: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- rollout status deployment/busybox
multinode_test.go:514: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-721181 -- rollout status deployment/busybox: (3.665226066s)
multinode_test.go:521: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:544: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- nslookup kubernetes.io
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-zdhbw -- nslookup kubernetes.io
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- nslookup kubernetes.default
multinode_test.go:562: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-zdhbw -- nslookup kubernetes.default
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:570: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-zdhbw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.77s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.95s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:580: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:588: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-zdhbw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:599: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-zdhbw -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.95s)
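
The pipeline above pulls the host IP for host.minikube.internal out of BusyBox's nslookup output (the test assumes the address sits on line 5, hence awk 'NR==5') and then pings it from inside each pod. A standalone sketch against one pod from this run; the pod name will differ on another run:

    HOST_IP=$(out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3")
    out/minikube-linux-amd64 kubectl -p multinode-721181 -- exec busybox-5b5d89c9d6-9gv46 -- sh -c "ping -c 1 $HOST_IP"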

                                                
                                    
TestMultiNode/serial/AddNode (40.3s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-721181 -v 3 --alsologtostderr
E0130 21:26:52.587873  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
multinode_test.go:111: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-721181 -v 3 --alsologtostderr: (39.705682131s)
multinode_test.go:117: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (40.30s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:211: (dbg) Run:  kubectl --context multinode-721181 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:133: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.7s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:174: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp testdata/cp-test.txt multinode-721181:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3145735879/001/cp-test_multinode-721181.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181:/home/docker/cp-test.txt multinode-721181-m02:/home/docker/cp-test_multinode-721181_multinode-721181-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test_multinode-721181_multinode-721181-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181:/home/docker/cp-test.txt multinode-721181-m03:/home/docker/cp-test_multinode-721181_multinode-721181-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m03 "sudo cat /home/docker/cp-test_multinode-721181_multinode-721181-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp testdata/cp-test.txt multinode-721181-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3145735879/001/cp-test_multinode-721181-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181-m02:/home/docker/cp-test.txt multinode-721181:/home/docker/cp-test_multinode-721181-m02_multinode-721181.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181 "sudo cat /home/docker/cp-test_multinode-721181-m02_multinode-721181.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181-m02:/home/docker/cp-test.txt multinode-721181-m03:/home/docker/cp-test_multinode-721181-m02_multinode-721181-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m03 "sudo cat /home/docker/cp-test_multinode-721181-m02_multinode-721181-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp testdata/cp-test.txt multinode-721181-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3145735879/001/cp-test_multinode-721181-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181-m03:/home/docker/cp-test.txt multinode-721181:/home/docker/cp-test_multinode-721181-m03_multinode-721181.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181 "sudo cat /home/docker/cp-test_multinode-721181-m03_multinode-721181.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181-m03:/home/docker/cp-test.txt multinode-721181-m02:/home/docker/cp-test_multinode-721181-m03_multinode-721181-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test_multinode-721181-m03_multinode-721181-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.70s)
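
CopyFile exercises "minikube cp" in all three directions: host to node, node to host, and node to node, checking each copy with ssh plus sudo cat. A condensed sketch of the three forms; the destination file names here are arbitrary:

    out/minikube-linux-amd64 -p multinode-721181 cp testdata/cp-test.txt multinode-721181:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181:/home/docker/cp-test.txt /tmp/cp-test-copy.txt
    out/minikube-linux-amd64 -p multinode-721181 cp multinode-721181:/home/docker/cp-test.txt multinode-721181-m02:/home/docker/cp-test-from-m01.txt
    out/minikube-linux-amd64 -p multinode-721181 ssh -n multinode-721181-m02 "sudo cat /home/docker/cp-test-from-m01.txt"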

                                                
                                    
TestMultiNode/serial/StopNode (2.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 node stop m03
multinode_test.go:238: (dbg) Done: out/minikube-linux-amd64 -p multinode-721181 node stop m03: (2.095579806s)
multinode_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status
multinode_test.go:244: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-721181 status: exit status 7 (444.265077ms)

                                                
                                                
-- stdout --
	multinode-721181
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-721181-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-721181-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:251: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr
multinode_test.go:251: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr: exit status 7 (431.981101ms)

                                                
                                                
-- stdout --
	multinode-721181
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-721181-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-721181-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 21:27:14.061845  663416 out.go:296] Setting OutFile to fd 1 ...
	I0130 21:27:14.061978  663416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:27:14.061987  663416 out.go:309] Setting ErrFile to fd 2...
	I0130 21:27:14.061992  663416 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 21:27:14.062208  663416 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 21:27:14.062380  663416 out.go:303] Setting JSON to false
	I0130 21:27:14.062412  663416 mustload.go:65] Loading cluster: multinode-721181
	I0130 21:27:14.062512  663416 notify.go:220] Checking for updates...
	I0130 21:27:14.062868  663416 config.go:182] Loaded profile config "multinode-721181": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 21:27:14.062888  663416 status.go:255] checking status of multinode-721181 ...
	I0130 21:27:14.063385  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.063441  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.082074  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43547
	I0130 21:27:14.082444  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.083041  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.083065  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.083367  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.083566  663416 main.go:141] libmachine: (multinode-721181) Calling .GetState
	I0130 21:27:14.085084  663416 status.go:330] multinode-721181 host status = "Running" (err=<nil>)
	I0130 21:27:14.085100  663416 host.go:66] Checking if "multinode-721181" exists ...
	I0130 21:27:14.085368  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.085410  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.099992  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46579
	I0130 21:27:14.100380  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.100836  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.100855  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.101193  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.101379  663416 main.go:141] libmachine: (multinode-721181) Calling .GetIP
	I0130 21:27:14.103876  663416 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:27:14.104238  663416 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:24:28 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:27:14.104265  663416 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:27:14.104399  663416 host.go:66] Checking if "multinode-721181" exists ...
	I0130 21:27:14.104787  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.104837  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.118594  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36875
	I0130 21:27:14.118940  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.119323  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.119346  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.119674  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.119833  663416 main.go:141] libmachine: (multinode-721181) Calling .DriverName
	I0130 21:27:14.120052  663416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0130 21:27:14.120089  663416 main.go:141] libmachine: (multinode-721181) Calling .GetSSHHostname
	I0130 21:27:14.122602  663416 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:27:14.122962  663416 main.go:141] libmachine: (multinode-721181) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:1b:35", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:24:28 +0000 UTC Type:0 Mac:52:54:00:d2:1b:35 Iaid: IPaddr:192.168.39.174 Prefix:24 Hostname:multinode-721181 Clientid:01:52:54:00:d2:1b:35}
	I0130 21:27:14.122994  663416 main.go:141] libmachine: (multinode-721181) DBG | domain multinode-721181 has defined IP address 192.168.39.174 and MAC address 52:54:00:d2:1b:35 in network mk-multinode-721181
	I0130 21:27:14.123125  663416 main.go:141] libmachine: (multinode-721181) Calling .GetSSHPort
	I0130 21:27:14.123277  663416 main.go:141] libmachine: (multinode-721181) Calling .GetSSHKeyPath
	I0130 21:27:14.123427  663416 main.go:141] libmachine: (multinode-721181) Calling .GetSSHUsername
	I0130 21:27:14.123534  663416 sshutil.go:53] new ssh client: &{IP:192.168.39.174 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181/id_rsa Username:docker}
	I0130 21:27:14.204752  663416 ssh_runner.go:195] Run: systemctl --version
	I0130 21:27:14.211096  663416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:27:14.223474  663416 kubeconfig.go:92] found "multinode-721181" server: "https://192.168.39.174:8443"
	I0130 21:27:14.223508  663416 api_server.go:166] Checking apiserver status ...
	I0130 21:27:14.223561  663416 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0130 21:27:14.234692  663416 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup
	I0130 21:27:14.242842  663416 api_server.go:182] apiserver freezer: "7:freezer:/kubepods/burstable/podf12df51f0cf7fec96f3664a9ee0f4186/crio-6cd639b4eeee33e1658232e6286da906f8af8a5769180592d3b1b2a96173a539"
	I0130 21:27:14.242902  663416 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf12df51f0cf7fec96f3664a9ee0f4186/crio-6cd639b4eeee33e1658232e6286da906f8af8a5769180592d3b1b2a96173a539/freezer.state
	I0130 21:27:14.252079  663416 api_server.go:204] freezer state: "THAWED"
	I0130 21:27:14.252108  663416 api_server.go:253] Checking apiserver healthz at https://192.168.39.174:8443/healthz ...
	I0130 21:27:14.257038  663416 api_server.go:279] https://192.168.39.174:8443/healthz returned 200:
	ok
	I0130 21:27:14.257064  663416 status.go:421] multinode-721181 apiserver status = Running (err=<nil>)
	I0130 21:27:14.257074  663416 status.go:257] multinode-721181 status: &{Name:multinode-721181 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0130 21:27:14.257092  663416 status.go:255] checking status of multinode-721181-m02 ...
	I0130 21:27:14.257418  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.257481  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.272291  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43923
	I0130 21:27:14.273746  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.274309  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.274344  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.274681  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.274871  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .GetState
	I0130 21:27:14.276450  663416 status.go:330] multinode-721181-m02 host status = "Running" (err=<nil>)
	I0130 21:27:14.276484  663416 host.go:66] Checking if "multinode-721181-m02" exists ...
	I0130 21:27:14.276765  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.276797  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.291490  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44517
	I0130 21:27:14.291932  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.292424  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.292451  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.292740  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.292914  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .GetIP
	I0130 21:27:14.295422  663416 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:27:14.295820  663416 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:27:14.295847  663416 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:27:14.295993  663416 host.go:66] Checking if "multinode-721181-m02" exists ...
	I0130 21:27:14.296292  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.296327  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.310829  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45027
	I0130 21:27:14.311220  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.311733  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.311752  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.312058  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.312232  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .DriverName
	I0130 21:27:14.312412  663416 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0130 21:27:14.312436  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHHostname
	I0130 21:27:14.315180  663416 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:27:14.315584  663416 main.go:141] libmachine: (multinode-721181-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:08:34:03", ip: ""} in network mk-multinode-721181: {Iface:virbr1 ExpiryTime:2024-01-30 22:25:34 +0000 UTC Type:0 Mac:52:54:00:08:34:03 Iaid: IPaddr:192.168.39.69 Prefix:24 Hostname:multinode-721181-m02 Clientid:01:52:54:00:08:34:03}
	I0130 21:27:14.315612  663416 main.go:141] libmachine: (multinode-721181-m02) DBG | domain multinode-721181-m02 has defined IP address 192.168.39.69 and MAC address 52:54:00:08:34:03 in network mk-multinode-721181
	I0130 21:27:14.315739  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHPort
	I0130 21:27:14.315893  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHKeyPath
	I0130 21:27:14.316034  663416 main.go:141] libmachine: (multinode-721181-m02) Calling .GetSSHUsername
	I0130 21:27:14.316165  663416 sshutil.go:53] new ssh client: &{IP:192.168.39.69 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18014-640473/.minikube/machines/multinode-721181-m02/id_rsa Username:docker}
	I0130 21:27:14.404280  663416 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0130 21:27:14.416063  663416 status.go:257] multinode-721181-m02 status: &{Name:multinode-721181-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0130 21:27:14.416102  663416 status.go:255] checking status of multinode-721181-m03 ...
	I0130 21:27:14.416438  663416 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0130 21:27:14.416490  663416 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0130 21:27:14.432133  663416 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41049
	I0130 21:27:14.432515  663416 main.go:141] libmachine: () Calling .GetVersion
	I0130 21:27:14.432962  663416 main.go:141] libmachine: Using API Version  1
	I0130 21:27:14.432981  663416 main.go:141] libmachine: () Calling .SetConfigRaw
	I0130 21:27:14.433284  663416 main.go:141] libmachine: () Calling .GetMachineName
	I0130 21:27:14.433484  663416 main.go:141] libmachine: (multinode-721181-m03) Calling .GetState
	I0130 21:27:14.434981  663416 status.go:330] multinode-721181-m03 host status = "Stopped" (err=<nil>)
	I0130 21:27:14.434996  663416 status.go:343] host is not running, skipping remaining checks
	I0130 21:27:14.435001  663416 status.go:257] multinode-721181-m03 status: &{Name:multinode-721181-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.97s)
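For reference, the status probe in the log above can be re-run by hand to see how minikube decides the apiserver is Running: it locates the kube-apiserver process, confirms its freezer cgroup is THAWED (i.e. the pod is not paused), then hits /healthz. A minimal sketch, assuming cgroup v1 with the freezer controller as on this VM, and reusing the PID and cgroup path from the log:

    # Find the apiserver PID inside the control-plane node (1093 in the log above)
    minikube ssh -p multinode-721181 "sudo pgrep -xnf kube-apiserver.*minikube.*"
    # Its freezer cgroup should report THAWED, meaning the pod is not paused
    minikube ssh -p multinode-721181 "sudo egrep ^[0-9]+:freezer: /proc/1093/cgroup"
    minikube ssh -p multinode-721181 "sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/podf12df51f0cf7fec96f3664a9ee0f4186/crio-6cd639b4eeee33e1658232e6286da906f8af8a5769180592d3b1b2a96173a539/freezer.state"
    # /healthz is readable without credentials on a default cluster, so plain curl suffices
    curl -k https://192.168.39.174:8443/healthz    # expect: ok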

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (30.17s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 node start m03 --alsologtostderr
E0130 21:27:20.272134  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-721181 node start m03 --alsologtostderr: (29.525685986s)
multinode_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status
multinode_test.go:303: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (30.17s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (1.76s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 node delete m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p multinode-721181 node delete m03: (1.214075744s)
multinode_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr
multinode_test.go:452: (dbg) Run:  kubectl get nodes
multinode_test.go:460: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.76s)
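The flattened go-template in the line above is easier to read run standalone: it prints one Ready condition per node, so after the delete a healthy two-node cluster should emit exactly two "True" lines. A minimal sketch of the same check:

    kubectl get nodes
    # One line per remaining node; each should read True once the deletion settles
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'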

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (440.26s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-721181 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0130 21:41:52.587034  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:44:25.156984  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:44:32.716793  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:46:52.587271  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 21:47:28.206602  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
multinode_test.go:382: (dbg) Done: out/minikube-linux-amd64 start -p multinode-721181 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (7m19.708114756s)
multinode_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p multinode-721181 status --alsologtostderr
multinode_test.go:402: (dbg) Run:  kubectl get nodes
multinode_test.go:410: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (440.26s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (47.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:471: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-721181
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-721181-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:480: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-721181-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (78.733396ms)

                                                
                                                
-- stdout --
	* [multinode-721181-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-721181-m02' is duplicated with machine name 'multinode-721181-m02' in profile 'multinode-721181'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:488: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-721181-m03 --driver=kvm2  --container-runtime=crio
E0130 21:49:25.157645  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:49:32.717279  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
multinode_test.go:488: (dbg) Done: out/minikube-linux-amd64 start -p multinode-721181-m03 --driver=kvm2  --container-runtime=crio: (46.284187136s)
multinode_test.go:495: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-721181
multinode_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-721181: exit status 80 (241.864281ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-721181
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-721181-m03 already exists in multinode-721181-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:500: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-721181-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.46s)
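Both non-zero exits above are the expected outcomes: a new profile may not reuse a machine name that already belongs to an existing profile, and node add refuses a node name that is already claimed by another profile. A minimal sketch of the same sequence, reusing the profile names from this run:

    minikube node list -p multinode-721181      # m02 is already a machine of this profile
    minikube start -p multinode-721181-m02      # rejected: exit 14 (MK_USAGE), duplicate name
    minikube start -p multinode-721181-m03      # free name: creates an independent cluster
    minikube node add -p multinode-721181       # rejected: exit 80 (GUEST_NODE_ADD), m03 is taken
    minikube delete -p multinode-721181-m03     # cleanup, as the test does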

                                                
                                    
x
+
TestScheduledStopUnix (121.84s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-488280 --memory=2048 --driver=kvm2  --container-runtime=crio
E0130 21:54:32.716312  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 21:54:55.634228  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-488280 --memory=2048 --driver=kvm2  --container-runtime=crio: (50.047824886s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-488280 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-488280 -n scheduled-stop-488280
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-488280 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-488280 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-488280 -n scheduled-stop-488280
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-488280
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-488280 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-488280
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-488280: exit status 7 (79.121513ms)

                                                
                                                
-- stdout --
	scheduled-stop-488280
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-488280 -n scheduled-stop-488280
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-488280 -n scheduled-stop-488280: exit status 7 (81.34701ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-488280" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-488280
--- PASS: TestScheduledStopUnix (121.84s)
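The scheduled-stop flow above arms a delayed shutdown, cancels it, re-arms it with a short delay, and then verifies the host actually stopped (exit status 7 from status is the expected "stopped" result). A minimal sketch of the same workflow; the profile name is only an example:

    minikube start -p scheduled-stop-demo --memory=2048 --driver=kvm2 --container-runtime=crio
    minikube stop -p scheduled-stop-demo --schedule 5m        # arm a stop five minutes out
    minikube status -p scheduled-stop-demo --format='{{.TimeToStop}}'
    minikube stop -p scheduled-stop-demo --cancel-scheduled   # disarm it again
    minikube stop -p scheduled-stop-demo --schedule 15s       # re-arm with a short delay
    sleep 30 && minikube status -p scheduled-stop-demo        # exit 7 once the host is Stopped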

                                                
                                    
x
+
TestRunningBinaryUpgrade (236.4s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2097231069 start -p running-upgrade-676332 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0130 21:56:52.587675  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2097231069 start -p running-upgrade-676332 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m15.350980518s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-676332 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-676332 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m39.0018617s)
helpers_test.go:175: Cleaning up "running-upgrade-676332" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-676332
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-676332: (1.188616324s)
--- PASS: TestRunningBinaryUpgrade (236.40s)

                                                
                                    
x
+
TestKubernetesUpgrade (226.6s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m33.984533887s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-433652
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-433652: (4.1151818s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-433652 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-433652 status --format={{.Host}}: exit status 7 (87.867069ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.568382442s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-433652 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (98.349574ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-433652] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-433652
	    minikube start -p kubernetes-upgrade-433652 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4336522 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-433652 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-433652 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m20.39994914s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-433652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-433652
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-433652: (1.292251035s)
--- PASS: TestKubernetesUpgrade (226.60s)
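The sequence above upgrades an existing cluster in place and then confirms that a downgrade is refused without deleting the profile first. A minimal sketch of the same path; the profile name is only an example:

    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop -p k8s-upgrade-demo
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.29.0-rc.2   # upgrade in place: allowed
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.16.0        # downgrade: exit 106 (K8S_DOWNGRADE_UNSUPPORTED)
    minikube start -p k8s-upgrade-demo --kubernetes-version=v1.29.0-rc.2   # restart at the current version still works
    minikube delete -p k8s-upgrade-demo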

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667473 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-667473 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (94.160536ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-667473] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
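The exit 14 above comes from flag validation only: --no-kubernetes and --kubernetes-version are mutually exclusive, and the suggested fix is to drop any globally configured version first. A minimal sketch; the profile name is only an example:

    minikube start -p no-k8s-demo --no-kubernetes --kubernetes-version=1.20   # rejected: exit 14 (MK_USAGE)
    minikube config unset kubernetes-version                                  # clear a global default, if set
    minikube start -p no-k8s-demo --no-kubernetes --driver=kvm2 --container-runtime=crio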

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (102.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667473 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667473 --driver=kvm2  --container-runtime=crio: (1m42.47843547s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-667473 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (102.75s)

                                                
                                    
x
+
TestPause/serial/Start (145.2s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-330608 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-330608 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (2m25.200706895s)
--- PASS: TestPause/serial/Start (145.20s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (71.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667473 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667473 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m9.882025352s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-667473 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-667473 status -o json: exit status 2 (278.16446ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-667473","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-667473
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-667473: (1.077950662s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (71.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (27.57s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667473 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0130 21:59:25.156861  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 21:59:32.716436  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667473 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.566373253s)
--- PASS: TestNoKubernetes/serial/Start (27.57s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-667473 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-667473 "sudo systemctl is-active --quiet service kubelet": exit status 1 (211.104528ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
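The non-zero exit above is the pass condition: systemctl is-active exits 0 only when the unit is active and 3 when it is inactive, so the failing ssh command is what proves the kubelet is not running. A minimal sketch of the same check:

    minikube ssh -p NoKubernetes-667473 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # non-zero while Kubernetes is disabled; 0 would mean the kubelet is active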

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (14.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (13.587534737s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (14.14s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (36.08s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-330608 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-330608 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.051132123s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (36.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.24s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-667473
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-667473: (1.236487561s)
--- PASS: TestNoKubernetes/serial/Stop (1.24s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (28.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-667473 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-667473 --driver=kvm2  --container-runtime=crio: (28.18245852s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (28.18s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-667473 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-667473 "sudo systemctl is-active --quiet service kubelet": exit status 1 (230.748156ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.23s)

                                                
                                    
x
+
TestPause/serial/Pause (0.86s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-330608 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.86s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.28s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-330608 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-330608 --output=json --layout=cluster: exit status 2 (278.024701ms)

                                                
                                                
-- stdout --
	{"Name":"pause-330608","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-330608","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.28s)
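Exit status 2 is expected here: with the cluster paused the layout status reports StatusCode 418 ("Paused") and the command deliberately exits non-zero so callers notice the non-OK state. A minimal sketch that captures the JSON before checking the exit code (jq is used only for readability and is not part of the test):

    # The command exits 2 while paused, so save the output before inspecting $?
    minikube status -p pause-330608 --output=json --layout=cluster > status.json; echo "exit=$?"
    jq '{cluster: .StatusName, apiserver: .Nodes[0].Components.apiserver.StatusName, kubelet: .Nodes[0].Components.kubelet.StatusName}' status.json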

                                                
                                    
x
+
TestPause/serial/Unpause (0.71s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-330608 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.71s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.93s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-330608 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.93s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.01s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-330608 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-330608 --alsologtostderr -v=5: (1.007791907s)
--- PASS: TestPause/serial/DeletePaused (1.01s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-381927 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-381927 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (127.547367ms)

                                                
                                                
-- stdout --
	* [false-381927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18014
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0130 22:00:45.533820  674537 out.go:296] Setting OutFile to fd 1 ...
	I0130 22:00:45.533990  674537 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:00:45.534002  674537 out.go:309] Setting ErrFile to fd 2...
	I0130 22:00:45.534008  674537 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0130 22:00:45.534316  674537 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18014-640473/.minikube/bin
	I0130 22:00:45.535069  674537 out.go:303] Setting JSON to false
	I0130 22:00:45.536393  674537 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9798,"bootTime":1706642248,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1049-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0130 22:00:45.536484  674537 start.go:138] virtualization: kvm guest
	I0130 22:00:45.538994  674537 out.go:177] * [false-381927] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0130 22:00:45.540536  674537 out.go:177]   - MINIKUBE_LOCATION=18014
	I0130 22:00:45.540585  674537 notify.go:220] Checking for updates...
	I0130 22:00:45.543508  674537 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0130 22:00:45.544834  674537 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18014-640473/kubeconfig
	I0130 22:00:45.546198  674537 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18014-640473/.minikube
	I0130 22:00:45.547546  674537 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0130 22:00:45.548739  674537 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0130 22:00:45.550455  674537 config.go:182] Loaded profile config "cert-expiration-822826": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:00:45.550556  674537 config.go:182] Loaded profile config "force-systemd-flag-509303": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0130 22:00:45.550643  674537 config.go:182] Loaded profile config "kubernetes-upgrade-433652": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0130 22:00:45.550738  674537 driver.go:392] Setting default libvirt URI to qemu:///system
	I0130 22:00:45.587948  674537 out.go:177] * Using the kvm2 driver based on user configuration
	I0130 22:00:45.589509  674537 start.go:298] selected driver: kvm2
	I0130 22:00:45.589526  674537 start.go:902] validating driver "kvm2" against <nil>
	I0130 22:00:45.589540  674537 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0130 22:00:45.591722  674537 out.go:177] 
	W0130 22:00:45.592988  674537 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0130 22:00:45.594218  674537 out.go:177] 

                                                
                                                
** /stderr **
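The exit 14 above is a pre-flight validation rather than a cluster failure: cri-o ships no built-in networking, so minikube refuses --cni=false for this runtime. Any concrete CNI value passes the same check; a minimal sketch with a hypothetical profile name:

    minikube start -p crio-cni-demo --driver=kvm2 --container-runtime=crio --cni=bridge   # accepted: a CNI is configured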
net_test.go:88: 
----------------------- debugLogs start: false-381927 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-381927" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 Jan 2024 21:59:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.174:8443
  name: cert-expiration-822826
contexts:
- context:
    cluster: cert-expiration-822826
    extensions:
    - extension:
        last-update: Tue, 30 Jan 2024 21:59:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-822826
  name: cert-expiration-822826
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-822826
  user:
    client-certificate: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/cert-expiration-822826/client.crt
    client-key: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/cert-expiration-822826/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-381927

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-381927"

                                                
                                                
----------------------- debugLogs end: false-381927 [took: 3.353483375s] --------------------------------
helpers_test.go:175: Cleaning up "false-381927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-381927
--- PASS: TestNetworkPlugins/group/false (3.63s)

TestStoppedBinaryUpgrade/Setup (0.57s)
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.57s)

TestStoppedBinaryUpgrade/Upgrade (153.66s)
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3636227235 start -p stopped-upgrade-742001 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3636227235 start -p stopped-upgrade-742001 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m22.990320935s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3636227235 -p stopped-upgrade-742001 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3636227235 -p stopped-upgrade-742001 stop: (2.143383485s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-742001 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-742001 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m8.521877863s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (153.66s)
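For context, the Upgrade test above exercises the legacy-to-current upgrade path: a cluster is created and then stopped with a previously released minikube binary (the temporary v1.26.0 build downloaded for this run), and the freshly built binary must be able to start it again. A hedged sketch of that sequence, reusing the exact commands, binary paths, and profile name from this run, would be:

  # upgrade-path sketch; paths and profile name are taken from the log above
  /tmp/minikube-v1.26.0.3636227235 start -p stopped-upgrade-742001 --memory=2200 --vm-driver=kvm2 --container-runtime=crio
  /tmp/minikube-v1.26.0.3636227235 -p stopped-upgrade-742001 stop
  out/minikube-linux-amd64 start -p stopped-upgrade-742001 --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio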

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (178.43s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-912992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-912992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (2m58.425858745s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (178.43s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-742001
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.91s)

TestStartStop/group/no-preload/serial/FirstStart (150.47s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-023824 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-023824 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m30.468225045s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (150.47s)

TestStartStop/group/embed-certs/serial/FirstStart (133.56s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-713938 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0130 22:04:08.207577  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-713938 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m13.564534077s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (133.56s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (125.94s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-850803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0130 22:04:25.157381  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:04:32.717026  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-850803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (2m5.943269775s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (125.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-912992 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [a24f5188-6b75-4de9-8a25-84a67697bd40] Pending
helpers_test.go:344: "busybox" [a24f5188-6b75-4de9-8a25-84a67697bd40] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [a24f5188-6b75-4de9-8a25-84a67697bd40] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.004347144s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-912992 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.42s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-912992 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-912992 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

TestStartStop/group/no-preload/serial/DeployApp (9.31s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-023824 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af881862-9b45-4215-b6ba-3a0f09571fdc] Pending
helpers_test.go:344: "busybox" [af881862-9b45-4215-b6ba-3a0f09571fdc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af881862-9b45-4215-b6ba-3a0f09571fdc] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.005672754s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-023824 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.31s)

TestStartStop/group/embed-certs/serial/DeployApp (9.34s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-713938 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03b0b65f-fb89-42aa-9b45-0cd925b486c0] Pending
helpers_test.go:344: "busybox" [03b0b65f-fb89-42aa-9b45-0cd925b486c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03b0b65f-fb89-42aa-9b45-0cd925b486c0] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004305623s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-713938 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-023824 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-023824 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.05s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-713938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-713938 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.069690016s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-713938 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.17s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [737188ce-94e9-46c8-a942-00a179271104] Pending
helpers_test.go:344: "busybox" [737188ce-94e9-46c8-a942-00a179271104] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [737188ce-94e9-46c8-a942-00a179271104] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004493737s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-850803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-850803 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.022496335s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-850803 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/old-k8s-version/serial/SecondStart (408.08s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-912992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-912992 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (6m47.782690251s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-912992 -n old-k8s-version-912992
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (408.08s)

TestStartStop/group/no-preload/serial/SecondStart (619.41s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-023824 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-023824 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (10m19.075898269s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-023824 -n no-preload-023824
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (619.41s)

TestStartStop/group/embed-certs/serial/SecondStart (895.03s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-713938 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-713938 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (14m54.74332971s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-713938 -n embed-certs-713938
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (895.03s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (901.97s)
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-850803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0130 22:09:15.765278  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 22:09:25.157628  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:09:32.716640  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
E0130 22:11:35.635215  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 22:11:52.587724  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
E0130 22:14:25.157278  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory
E0130 22:14:32.716594  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-850803 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (15m1.66500602s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-850803 -n default-k8s-diff-port-850803
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (901.97s)

TestStartStop/group/newest-cni/serial/FirstStart (58.81s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-507807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-507807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (58.813185083s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (58.81s)

TestNetworkPlugins/group/auto/Start (105.64s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m45.640422292s)
--- PASS: TestNetworkPlugins/group/auto/Start (105.64s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-507807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-507807 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.13815462s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/newest-cni/serial/Stop (3.14s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-507807 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-507807 --alsologtostderr -v=3: (3.140515978s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (3.14s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-507807 -n newest-cni-507807
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-507807 -n newest-cni-507807: exit status 7 (104.644231ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-507807 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/newest-cni/serial/SecondStart (69.06s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-507807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-507807 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m8.694905036s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-507807 -n newest-cni-507807
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (69.06s)

TestNetworkPlugins/group/kindnet/Start (107.92s)
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m47.92204935s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (107.92s)

TestNetworkPlugins/group/calico/Start (128.93s)
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0130 22:34:32.716782  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/functional-500919/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (2m8.926554032s)
--- PASS: TestNetworkPlugins/group/calico/Start (128.93s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-507807 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.7s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-507807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-507807 --alsologtostderr -v=1: (1.282935039s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-507807 -n newest-cni-507807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-507807 -n newest-cni-507807: exit status 2 (291.916819ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-507807 -n newest-cni-507807
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-507807 -n newest-cni-507807: exit status 2 (279.302474ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-507807 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 unpause -p newest-cni-507807 --alsologtostderr -v=1: (1.041472794s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-507807 -n newest-cni-507807
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-507807 -n newest-cni-507807
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.70s)
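The Pause check above can be reproduced by hand against this run's profile; a minimal sketch, assuming the locally built binary at out/minikube-linux-amd64 and the newest-cni-507807 profile from this run, is below. Quoting the --format templates is the only addition for shell use; the non-zero exit codes from "status" are expected while components are paused and are what the test treats as acceptable.

  # pause/unpause check, reusing the commands from the Pause test above
  out/minikube-linux-amd64 pause -p newest-cni-507807 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-507807   # prints Paused, exit status 2
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p newest-cni-507807     # prints Stopped, exit status 2
  out/minikube-linux-amd64 unpause -p newest-cni-507807 --alsologtostderr -v=1
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-507807   # completes without a non-zero exit once unpaused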
E0130 22:37:24.945627  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.crt: no such file or directory
E0130 22:37:28.209886  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/addons-444608/client.crt: no such file or directory

TestNetworkPlugins/group/custom-flannel/Start (107.44s)
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m47.440902526s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (107.44s)

TestNetworkPlugins/group/auto/KubeletFlags (0.28s)
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-381927 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.28s)

TestNetworkPlugins/group/auto/NetCatPod (15.32s)
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-crz55" [423aa935-226b-4a03-84a5-85b60638aa01] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-crz55" [423aa935-226b-4a03-84a5-85b60638aa01] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 15.005026781s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (15.32s)

TestNetworkPlugins/group/auto/DNS (0.23s)
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.23s)

TestNetworkPlugins/group/auto/Localhost (0.2s)
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/auto/HairPin (0.16s)
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-dwg5z" [8ac0d5e0-0733-493e-9cf2-7f65e48a7015] Running
E0130 22:35:42.236241  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.241541  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.251827  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.272139  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.312510  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.393567  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.554427  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:42.875119  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:43.515685  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:35:44.796622  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.007366337s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-381927 "pgrep -a kubelet"
E0130 22:35:47.356946  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.23s)

TestNetworkPlugins/group/kindnet/NetCatPod (13.26s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c6s5x" [1ae8593c-c097-431a-920d-e7c0791e59e2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0130 22:35:52.477428  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-c6s5x" [1ae8593c-c097-431a-920d-e7c0791e59e2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 13.004712508s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (13.26s)

TestNetworkPlugins/group/enable-default-cni/Start (78.92s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m18.921172512s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (78.92s)

TestNetworkPlugins/group/kindnet/DNS (0.19s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.19s)

TestNetworkPlugins/group/kindnet/Localhost (0.17s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

TestNetworkPlugins/group/kindnet/HairPin (0.18s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (104.67s)
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
E0130 22:36:23.199201  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
E0130 22:36:23.504713  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.crt: no such file or directory
E0130 22:36:28.226591  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.231898  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.241993  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.262334  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.302711  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.383038  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.543490  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:28.863666  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:29.504306  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:30.784870  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
E0130 22:36:33.345100  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m44.673234797s)
--- PASS: TestNetworkPlugins/group/flannel/Start (104.67s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-z4fts" [29b93296-f751-42b0-a74d-f3ea189716c4] Running
E0130 22:36:38.466051  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006043407s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.26s)
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-381927 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

TestNetworkPlugins/group/calico/NetCatPod (15.28s)
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gbfm8" [a1eb045b-7906-436b-b399-ec51fe6b6f6f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0130 22:36:43.985381  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/no-preload-023824/client.crt: no such file or directory
E0130 22:36:48.706500  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/default-k8s-diff-port-850803/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gbfm8" [a1eb045b-7906-436b-b399-ec51fe6b6f6f] Running
E0130 22:36:52.587696  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/ingress-addon-legacy-298651/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.005722774s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.28s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-381927 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.24s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-92n6l" [b8aa5090-bd5c-4caa-b29f-7ac76e550b7b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-92n6l" [b8aa5090-bd5c-4caa-b29f-7ac76e550b7b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.006137888s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)
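For reference, the three calico connectivity checks above (DNS, Localhost, HairPin) can be replayed by hand against the same profile; a minimal sketch, assuming the calico-381927 profile and its netcat deployment are still running (commands mirror the test invocations recorded above):

kubectl --context calico-381927 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context calico-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context calico-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"

The last command is the hairpin check: the pod connects back to itself through its own "netcat" Service on port 8080 rather than through localhost.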

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0130 22:37:04.159827  647718 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/old-k8s-version-912992/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-381927 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-c5xg5" [5f92c940-64e6-4619-b924-d3172369f231] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-c5xg5" [5f92c940-64e6-4619-b924-d3172369f231] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 13.005279204s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (13.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (93.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-381927 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m33.464557655s)
--- PASS: TestNetworkPlugins/group/bridge/Start (93.46s)
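A quick follow-up check of what the bridge CNI start produced (a sketch, not part of the test run; the kubelet flag check mirrors the KubeletFlags subtest below, and /etc/cni/net.d is the conventional CNI config directory on the node):

out/minikube-linux-amd64 ssh -p bridge-381927 "pgrep -a kubelet"
out/minikube-linux-amd64 ssh -p bridge-381927 "sudo ls /etc/cni/net.d"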

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-v7szr" [0342f751-0e8a-4225-b442-703eb6064abb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.006763183s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-381927 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (13.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-lxlz7" [60cdd715-4e22-47b0-8b58-fdc8ba990d31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-lxlz7" [60cdd715-4e22-47b0-8b58-fdc8ba990d31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 13.004302428s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (13.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-381927 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-381927 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-8mwjd" [fff7c2a0-68eb-49aa-b603-ac06aeb7fd4d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-8mwjd" [fff7c2a0-68eb-49aa-b603-ac06aeb7fd4d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.005109895s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-381927 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-381927 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    

Test skip (39/310)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
135 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
136 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
137 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
140 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
141 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
142 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
163 TestImageBuild 0
196 TestKicCustomNetwork 0
197 TestKicExistingNetwork 0
198 TestKicCustomSubnet 0
199 TestKicStaticIP 0
231 TestChangeNoneUser 0
234 TestScheduledStopWindows 0
236 TestSkaffold 0
238 TestInsufficientStorage 0
242 TestMissingContainerUpgrade 0
251 TestStartStop/group/disable-driver-mounts 0.16
271 TestNetworkPlugins/group/kubenet 3.72
279 TestNetworkPlugins/group/cilium 3.67
x
+
TestDownloadOnly/v1.16.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-818908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-818908
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:523: 
----------------------- debugLogs start: kubenet-381927 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-381927" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 Jan 2024 21:59:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.174:8443
  name: cert-expiration-822826
contexts:
- context:
    cluster: cert-expiration-822826
    extensions:
    - extension:
        last-update: Tue, 30 Jan 2024 21:59:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-822826
  name: cert-expiration-822826
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-822826
  user:
    client-certificate: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/cert-expiration-822826/client.crt
    client-key: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/cert-expiration-822826/client.key
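The dump above shows an empty current-context and only the cert-expiration-822826 context left over from an earlier test; the kubenet-381927 context queried by the surrounding debug commands was never created because this test skipped before minikube start ran. A minimal check of which contexts actually exist (a sketch using the standard kubectl subcommand):

kubectl config get-contexts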

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-381927

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-381927"

                                                
                                                
----------------------- debugLogs end: kubenet-381927 [took: 3.560433325s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-381927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-381927
--- SKIP: TestNetworkPlugins/group/kubenet (3.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:523: 
----------------------- debugLogs start: cilium-381927 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-381927" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18014-640473/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Tue, 30 Jan 2024 21:59:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.174:8443
  name: cert-expiration-822826
contexts:
- context:
    cluster: cert-expiration-822826
    extensions:
    - extension:
        last-update: Tue, 30 Jan 2024 21:59:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: cert-expiration-822826
  name: cert-expiration-822826
current-context: ""
kind: Config
preferences: {}
users:
- name: cert-expiration-822826
  user:
    client-certificate: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/cert-expiration-822826/client.crt
    client-key: /home/jenkins/minikube-integration/18014-640473/.minikube/profiles/cert-expiration-822826/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-381927

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-381927" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-381927"

                                                
                                                
----------------------- debugLogs end: cilium-381927 [took: 3.524286232s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-381927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-381927
--- SKIP: TestNetworkPlugins/group/cilium (3.67s)

                                                
                                    